[Openstack] Getting Havana to work with Neutron, ML2 and VXLAN (vif_type=binding_failed error)

Tom Verdaat tom at tomverdaat.com
Thu Apr 10 07:16:40 UTC 2014


Thanks all for your help. I managed to get things working, kind of.

What did the trick was a combination of two things:
1) making sure the bridge_mappings, bridge_uplinks and public subnet
settings were correct
2) removing br-ex from /etc/network/interfaces and letting openvswitch
create the br-ex bridge itself (see the sketch below)
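
For reference, a minimal sketch of what that ended up looking like here
(this reflects my setup, not a general recipe: physnet1 is the physical
network name I use for the public network, and eth0 stands in for the
uplink NIC):

  # OVS agent configuration, [ovs] section -- map the public network's
  # physical network name to the external bridge:
  bridge_mappings = physnet1:br-ex

  # No br-ex stanza in /etc/network/interfaces; let openvswitch create
  # and own the bridge, then attach the uplink NIC to it:
  ovs-vsctl add-br br-ex
  ovs-vsctl add-port br-ex eth0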

Traffic now flows fine between instances and the outside world, even
though openvswitch (and thus neutron and the horizon dashboard) still
reports the qg-* ports on br-ex and the qr-* ports on br-int as down. Does
anybody know why that is?
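
In case it helps anyone debugging the same symptom, these are the commands
I'm comparing against what neutron reports (the router UUID below is a
placeholder):

  ovs-vsctl show                                # ports actually attached to br-ex / br-int
  ip netns list                                 # qrouter-* / qdhcp-* namespaces present?
  ip netns exec qrouter-<router-uuid> ip addr   # are the qg-* / qr-* devices UP in the namespace?
  neutron port-list                             # the status column neutron/horizon show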

Cheers,

Tom



2014-03-31 3:10 GMT+02:00 Robert Kukura <kukura at noironetworks.com>:

>
> On 3/27/14, 7:15 PM, Tom Verdaat wrote:
>
>  Thanks for your reply.
>
> I checked that. Followed these instructions
> <https://ask.openstack.org/en/question/6695/ml2-neutron-plugin-installation-and-configuration/?answer=7259#post-id-7259>
> too. Using the vxlan type driver and the openvswitch mechanism driver.
> Agents are up on both the networking node and the compute node.
>
>  Pasted the agent information and configuration files below. vm-1 is the
> networking node, vm-4 the compute node. Tried configuring a public network
> for floating IPs, but I get the same problem with a simple internal tenant
> network.
>
>  Any idea why it won't work?
>
> If you run "nova service-list" as admin, do you see the nova-compute
> services with the exact same host IDs ("vm-1" and "vm-4") as the L2 agents?
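>
> For example (just comparing the host columns of the two listings):
>
>   nova service-list | grep nova-compute   # "Host" column should show vm-1 / vm-4
>   neutron agent-list                       # "host" column should use exactly the same names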
>
> Also, not sure if this is related, but I noticed that l2_population is
> True on vm-1 and False on vm-4. Both of these should be True if the
> l2population mechanism driver is enabled on the server, and False otherwise.
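>
> For example, a sketch of what should match on every host running the OVS
> agent (the exact file depends on your packaging -- e.g.
> ovs_neutron_plugin.ini, or the ml2_conf.ini the agent is started with):
>
>   [agent]
>   # True on all agents iff l2population is listed in mechanism_drivers on
>   # the server; False on all agents otherwise.
>   l2_population = True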
>
> -Bob
>
>
>
>  Thanks,
>
>  Tom
> ---
>
> # neutron agent-list
>
> +--------------------------------------+--------------------+------+-------+----------------+
> | id                                   | agent_type         | host | alive | admin_state_up |
> +--------------------------------------+--------------------+------+-------+----------------+
> | e3b5c1b3-307c-4c27-8ea2-30e253da776b | Loadbalancer agent | vm-1 | :-)   | True           |
> | 8aa54d4a-27db-4ec9-a49f-0766479ce35c | Metering agent     | vm-1 | :-)   | True           |
> | e63e47c7-b78c-42b3-b28e-62e6ada26be2 | DHCP agent         | vm-1 | :-)   | True           |
> | 994c8598-ee7d-46ae-9681-b7718758199c | L3 agent           | vm-1 | :-)   | True           |
> | c26c302c-19a1-44af-b941-061a11e559eb | Open vSwitch agent | vm-1 | :-)   | True           |
> | d80bac01-f651-4ad3-8564-d6933ee9a919 | Open vSwitch agent | vm-4 | :-)   | True           |
> +--------------------------------------+--------------------+------+-------+----------------+
>
>
> # neutron agent-show 994c8598-ee7d-46ae-9681-b7718758199c
>
> +---------------------+-------------------------------------------------------------------------------+
> | Field               | Value                                                                         |
> +---------------------+-------------------------------------------------------------------------------+
> | admin_state_up      | True                                                                          |
> | agent_type          | L3 agent                                                                      |
> | alive               | True                                                                          |
> | binary              | neutron-l3-agent                                                              |
> | configurations      | {                                                                             |
> |                     |      "router_id": "",                                                         |
> |                     |      "gateway_external_network_id": "",                                       |
> |                     |      "handle_internal_only_routers": true,                                    |
> |                     |      "use_namespaces": true,                                                  |
> |                     |      "routers": 0,                                                            |
> |                     |      "interfaces": 0,                                                         |
> |                     |      "floating_ips": 0,                                                       |
> |                     |      "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver",  |
> |                     |      "ex_gw_ports": 0                                                         |
> |                     | }                                                                             |
> | created_at          | 2014-03-27 00:34:10.528993                                                    |
> | description         |                                                                               |
> | heartbeat_timestamp | 2014-03-27 09:47:34.777233                                                    |
> | host                | vm-1                                                                          |
> | id                  | 994c8598-ee7d-46ae-9681-b7718758199c                                          |
> | started_at          | 2014-03-27 08:47:30.366770                                                    |
> | topic               | l3_agent                                                                      |
> +---------------------+-------------------------------------------------------------------------------+
>
>
> # neutron agent-show c26c302c-19a1-44af-b941-061a11e559eb
> +---------------------+--------------------------------------+
> | Field               | Value                                |
> +---------------------+--------------------------------------+
> | admin_state_up      | True                                 |
> | agent_type          | Open vSwitch agent                   |
> | alive               | True                                 |
> | binary              | neutron-openvswitch-agent            |
> | configurations      | {                                    |
> |                     |      "tunnel_types": [               |
> |                     |           "vxlan"                    |
> |                     |      ],                              |
> |                     |      "tunneling_ip": "10.12.0.20",   |
> |                     |      "bridge_mappings": {},          |
> |                     |      "l2_population": true,          |
> |                     |      "devices": 0                    |
> |                     | }                                    |
> | created_at          | 2014-03-27 00:35:17.555910           |
> | description         |                                      |
> | heartbeat_timestamp | 2014-03-27 09:33:35.133166           |
> | host                | vm-1                                 |
> | id                  | c26c302c-19a1-44af-b941-061a11e559eb |
> | started_at          | 2014-03-27 00:35:22.920690           |
> | topic               | N/A                                  |
> +---------------------+--------------------------------------+
>
>
> # neutron agent-show d80bac01-f651-4ad3-8564-d6933ee9a919
> +---------------------+--------------------------------------+
> | Field               | Value                                |
> +---------------------+--------------------------------------+
> | admin_state_up      | True                                 |
> | agent_type          | Open vSwitch agent                   |
> | alive               | True                                 |
> | binary              | neutron-openvswitch-agent            |
> | configurations      | {                                    |
> |                     |      "tunnel_types": [               |
> |                     |           "vxlan"                    |
> |                     |      ],                              |
> |                     |      "tunneling_ip": "10.12.0.23",   |
> |                     |      "bridge_mappings": {},          |
> |                     |      "l2_population": false,         |
> |                     |      "devices": 0                    |
> |                     | }                                    |
> | created_at          | 2014-03-27 09:13:24.616733           |
> | description         |                                      |
> | heartbeat_timestamp | 2014-03-27 09:33:44.360747           |
> | host                | vm-4                                 |
> | id                  | d80bac01-f651-4ad3-8564-d6933ee9a919 |
> | started_at          | 2014-03-27 09:13:24.616733           |
> | topic               | N/A                                  |
> +---------------------+--------------------------------------+
>
>
>
> ==============================================================================================================================================================
> /etc/neutron/neutron.conf
>
> ==============================================================================================================================================================
>
> [DEFAULT]
> # Default log level is INFO
> # verbose and debug have the same result.
> # One of them will set DEBUG log level output
> # debug = False
> debug = False
> # verbose = False
> verbose = False
>
> # Where to store Neutron state files.  This directory must be writable by
> # the user executing the agent.
> state_path = /var/lib/neutron
>
> # Where to store lock files
> lock_path = $state_path/lock
>
> # log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
> # log_date_format = %Y-%m-%d %H:%M:%S
>
> # use_syslog                           -> syslog
> # log_file and log_dir                 -> log_dir/log_file
> # (not log_file) and log_dir           -> log_dir/{binary_name}.log
> # use_stderr                           -> stderr
> # (not user_stderr) and (not log_file) -> stdout
> # publish_errors                       -> notification system
>
> # use_syslog = False
> use_syslog = False
> # syslog_log_facility = LOG_USER
>
> # use_stderr = True
> # log_file =
> # log_dir =
> log_dir =/var/log/neutron
>
> # publish_errors = False
>
> # Address to bind the API server
> # bind_host = 0.0.0.0
> bind_host = 0.0.0.0
>
> # Port to bind the API server to
> # bind_port = 9696
> bind_port = 9696
>
> # Path to the extensions.  Note that this can be a colon-separated list of
> # paths.  For example:
> # api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions
> # The __path__ of neutron.extensions is appended to this, so if your
> # extensions are in there you don't need to specify them here
> # api_extensions_path =
>
> # Neutron plugin provider module
> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
>
> # Advanced service modules
> # service_plugins =
> service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin,neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.vpn.plugin.VPNDriverPlugin,neutron.services.metering.metering_plugin.MeteringPlugin
>
> # Paste configuration file
> # api_paste_config = api-paste.ini
>
> # The strategy to be used for auth.
> # Supported values are 'keystone'(default), 'noauth'.
> # auth_strategy = keystone
> auth_strategy = keystone
>
> # Base MAC address. The first 3 octets will remain unchanged. If the
> # 4th octet is not 00, it will also be used. The others will be
> # randomly generated.
> # 3 octet
> # base_mac = fa:16:3e:00:00:00
> base_mac = fa:16:3e:00:00:00
> # 4 octet
> # base_mac = fa:16:3e:4f:00:00
>
> # Maximum amount of retries to generate a unique MAC address
> # mac_generation_retries = 16
> mac_generation_retries = 16
>
> # DHCP Lease duration (in seconds)
> # dhcp_lease_duration = 86400
> dhcp_lease_duration = 120
>
> # Allow sending resource operation notification to DHCP agent
> # dhcp_agent_notification = True
>
> # Enable or disable bulk create/update/delete operations
> # allow_bulk = True
> allow_bulk = True
> # Enable or disable pagination
> # allow_pagination = False
> # Enable or disable sorting
> # allow_sorting = False
> # Enable or disable overlapping IPs for subnets
> # Attention: the following parameter MUST be set to False if Neutron is
> # being used in conjunction with nova security groups
> # allow_overlapping_ips = False
> allow_overlapping_ips = False
> # Ensure that configured gateway is on subnet
> # force_gateway_on_subnet = False
>
>
> # RPC configuration options. Defined in rpc __init__
> # The messaging module to use, defaults to kombu.
> # rpc_backend = neutron.openstack.common.rpc.impl_kombu
> rpc_backend = neutron.openstack.common.rpc.impl_kombu
> # Size of RPC thread pool
> # rpc_thread_pool_size = 64
> # Size of RPC connection pool
> # rpc_conn_pool_size = 30
> # Seconds to wait for a response from call or multicall
> # rpc_response_timeout = 60
> # Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
> # rpc_cast_timeout = 30
> # Modules of exceptions that are permitted to be recreated
> # upon receiving exception data from an rpc call.
> # allowed_rpc_exception_modules = neutron.openstack.common.exception, nova.exception
> # AMQP exchange to connect to if using RabbitMQ or QPID
> # control_exchange = neutron
> control_exchange = neutron
>
> # If passed, use a fake RabbitMQ provider
> # fake_rabbit = False
>
> # Configuration options if sending notifications via kombu rpc (these are
> # the defaults)
> # SSL version to use (valid only if SSL enabled)
> # kombu_ssl_version =
> # SSL key file (valid only if SSL enabled)
> # kombu_ssl_keyfile =
> # SSL cert file (valid only if SSL enabled)
> # kombu_ssl_certfile =
> # SSL certification authority file (valid only if SSL enabled)
> # kombu_ssl_ca_certs =
> # IP address of the RabbitMQ installation
> # rabbit_host = localhost
> rabbit_host = 127.0.0.1
> # Password of the RabbitMQ server
> # rabbit_password = guest
> rabbit_password = <<removed>>
> # Port where RabbitMQ server is running/listening
> # rabbit_port = 5672
> rabbit_port = 5672
> # RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
> # rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'
> # rabbit_hosts = localhost:5672
> rabbit_hosts = 127.0.0.1:5672
> # User ID used for RabbitMQ connections
> # rabbit_userid = guest
> rabbit_userid = openstack
> # Location of a virtual RabbitMQ installation.
> # rabbit_virtual_host = /
> rabbit_virtual_host = /
> # Maximum retries with trying to connect to RabbitMQ
> # (the default of 0 implies an infinite retry count)
> # rabbit_max_retries = 0
> # RabbitMQ connection retry interval
> # rabbit_retry_interval = 1
> # Use HA queues in RabbitMQ (x-ha-policy: all). You need to
> # wipe RabbitMQ database when changing this option. (boolean value)
> # rabbit_ha_queues = false
> rabbit_ha_queues = False
>
> # QPID
> # rpc_backend=neutron.openstack.common.rpc.impl_qpid
> # Qpid broker hostname
> # qpid_hostname = localhost
> # Qpid broker port
> # qpid_port = 5672
> # Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
> # qpid_hosts is defaulted to '$qpid_hostname:$qpid_port'
> # qpid_hosts = localhost:5672
> # Username for qpid connection
> # qpid_username = ''
> # Password for qpid connection
> # qpid_password = ''
> # Space separated list of SASL mechanisms to use for auth
> # qpid_sasl_mechanisms = ''
> # Seconds between connection keepalive heartbeats
> # qpid_heartbeat = 60
> # Transport to use, either 'tcp' or 'ssl'
> # qpid_protocol = tcp
> # Disable Nagle algorithm
> # qpid_tcp_nodelay = True
>
> # ZMQ
> # rpc_backend=neutron.openstack.common.rpc.impl_zmq
> # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
> # The "host" option should point or resolve to this address.
> # rpc_zmq_bind_address = *
>
> # ============ Notification System Options =====================
>
> # Notifications can be sent when network/subnet/port are created, updated or deleted.
> # There are three methods of sending notifications: logging (via the
> # log_file directive), rpc (via a message queue) and
> # noop (no notifications sent, the default)
>
> # Notification_driver can be defined multiple times
> # Do nothing driver
> # notification_driver = neutron.openstack.common.notifier.no_op_notifier
> # Logging driver
> # notification_driver = neutron.openstack.common.notifier.log_notifier
> # RPC driver. DHCP agents needs it.
> notification_driver = neutron.openstack.common.notifier.rpc_notifier
>
> # default_notification_level is used to form actual topic name(s) or to
> # set logging level
> # default_notification_level = INFO
>
> # default_publisher_id is a part of the notification payload
> # host = myhost.com
> # default_publisher_id = $host
>
> # Defined in rpc_notifier, can be comma separated values.
> # The actual topic names will be %s.%(default_notification_level)s
> # notification_topics = notifications
>
> # Default maximum number of items returned in a single response,
> # value == infinite and value < 0 means no max limit, and value must be
> # greater than 0. If the number of items requested is greater than
> # pagination_max_limit, server will just return pagination_max_limit
> # of number of items.
> # pagination_max_limit = -1
>
> # Maximum number of DNS nameservers per subnet
> # max_dns_nameservers = 5
>
> # Maximum number of host routes per subnet
> # max_subnet_host_routes = 20
>
> # Maximum number of fixed ips per port
> # max_fixed_ips_per_port = 5
>
> # =========== items for agent management extension =============
> # Seconds to regard the agent as down; should be at least twice
> # report_interval, to be sure the agent is down for good
> # agent_down_time = 9
> agent_down_time = 9
> # ===========  end of items for agent management extension =====
>
> # =========== items for agent scheduler extension =============
> # Driver to use for scheduling network to DHCP agent
> # network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
> # Driver to use for scheduling router to a default L3 agent
> # router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
> router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
> # Driver to use for scheduling a loadbalancer pool to an lbaas agent
> # loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler
>
> # Allow auto scheduling networks to DHCP agent. It will schedule non-hosted
> # networks to first DHCP agent which sends get_active_networks message to
> # neutron server
> # network_auto_schedule = True
>
> # Allow auto scheduling routers to L3 agent. It will schedule non-hosted
> # routers to first L3 agent which sends sync_routers message to neutron server
> # router_auto_schedule = True
>
> # Number of DHCP agents scheduled to host a network. This enables redundant
> # DHCP agents for configured networks.
> # dhcp_agents_per_network = 1
> dhcp_agents_per_network = 1
>
> # ===========  end of items for agent scheduler extension =====
>
> # =========== WSGI parameters related to the API server ==============
> # Number of separate worker processes to spawn.  The default, 0, runs the
> # worker thread in the current process.  Greater than 0 launches that number
> # of child processes as workers.  The parent process manages them.
> # api_workers = 0
> api_workers = 0
> # Sets the value of TCP_KEEPIDLE in seconds to use for each server socket
> # when starting API server. Not supported on OS X.
> # tcp_keepidle = 600
>
> # Number of seconds to keep retrying to listen
> # retry_until_window = 30
>
> # Number of backlog requests to configure the socket with.
> # backlog = 4096
>
> # Enable SSL on the API server
> # use_ssl = False
>
> # Certificate file to use when starting API server securely
> # ssl_cert_file = /path/to/certfile
>
> # Private key file to use when starting API server securely
> # ssl_key_file = /path/to/keyfile
>
> # CA certificate file to use when starting API server securely to
> # verify connecting clients. This is an optional parameter only required if
> # API clients need to authenticate to the API server using SSL certificates
> # signed by a trusted CA
> # ssl_ca_file = /path/to/cafile
> # ======== end of WSGI parameters related to the API server ==========
> report_interval=4
>
> [quotas]
> # resource name(s) that are supported in quota features
> # quota_items = network,subnet,port
>
> # default number of resource allowed per tenant, minus for unlimited
> # default_quota = -1
>
> # number of networks allowed per tenant, and minus means unlimited
> # quota_network = 10
>
> # number of subnets allowed per tenant, and minus means unlimited
> # quota_subnet = 10
>
> # number of ports allowed per tenant, and minus means unlimited
> # quota_port = 50
>
> # number of security groups allowed per tenant, and minus means unlimited
> # quota_security_group = 10
>
> # number of security group rules allowed per tenant, and minus means unlimited
> # quota_security_group_rule = 100
>
> # default driver to use for quota checks
> # quota_driver = neutron.db.quota_db.DbQuotaDriver
>
> [agent]
> # Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" to use the real
> # root filter facility.
> # Change to "sudo" to skip the filtering and just run the command directly
> root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
>
> # =========== items for agent management extension =============
> # seconds between nodes reporting state to server; should be less than
> # agent_down_time, best if it is half or less than agent_down_time
> # report_interval = 4
>
> # ===========  end of items for agent management extension =====
>
> [keystone_authtoken]
> auth_host = 127.0.0.1
> auth_port = 35357
> auth_protocol = http
> admin_tenant_name = services
> admin_user = neutron
> admin_password = <<removed>>
> signing_dir = $state_path/keystone-signing
> auth_uri=http://127.0.0.1:5000/
>
> [database]
> # This line MUST be changed to actually run the plugin.
> # Example:
> # connection = mysql://root:pass@127.0.0.1:3306/neutron
> # Replace 127.0.0.1 above with the IP address of the database used by the
> # main neutron server. (Leave it as is if the database runs on this host.)
> connection = sqlite:////var/lib/neutron/ovs.sqlite
>
> # The SQLAlchemy connection string used to connect to the slave database
> # slave_connection =
>
> # Database reconnection retry times - in event connectivity is lost
> # set to -1 implies an infinite retry count
> # max_retries = 10
> max_retries = 10
>
> # Database reconnection interval in seconds - if the initial connection to
> # the database fails
> # retry_interval = 10
> retry_interval = 10
>
> # Minimum number of SQL connections to keep open in a pool
> # min_pool_size = 1
>
> # Maximum number of SQL connections to keep open in a pool
> # max_pool_size = 10
>
> # Timeout in seconds before idle sql connections are reaped
> # idle_timeout = 3600
> idle_timeout = 3600
>
> # If set, use this value for max_overflow with sqlalchemy
> # max_overflow = 20
>
> # Verbosity of SQL debugging information. 0=None, 100=Everything
> # connection_debug = 0
>
> # Add python stack traces to SQL as comment strings
> # connection_trace = False
>
> # If set, use this value for pool_timeout with sqlalchemy
> # pool_timeout = 10
>
> [service_providers]
> # Specify service providers (drivers) for advanced services like loadbalancer, VPN, Firewall.
> # Must be in form:
> # service_provider=<service_type>:<name>:<driver>[:default]
> # List of allowed service type include LOADBALANCER, FIREWALL, VPN
> # Combination of <service type> and <name> must be unique; <driver> must also be unique
> # this is multiline option, example for default provider:
> # service_provider=LOADBALANCER:name:lbaas_plugin_driver_path:default
> # example of non-default provider:
> # service_provider=FIREWALL:name2:firewall_driver_path
> # --- Reference implementations ---
>
> service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
>
> [QUOTAS]
> quota_firewall_rule=-1
> quota_subnet=10
> quota_router=10
> quota_firewall=1
> quota_security_group=10
> quota_network=10
> default_quota=-1
> quota_firewall_policy=1
> quota_security_group_rule=100
> quota_floatingip=50
> quota_port=50
> quota_driver=neutron.db.quota_db.DbQuotaDriver
>
> [AGENT]
> root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
>
>
>
> ==============================================================================================================================================================
> /etc/neutron/plugins/ml2/ml2_conf.ini
>
> ==============================================================================================================================================================
>
> [ml2]
> # (ListOpt) List of network type driver entrypoints to be loaded from
> # the neutron.ml2.type_drivers namespace.
> #
> # type_drivers = local,flat,vlan,gre,vxlan
> type_drivers = vxlan
> # Example: type_drivers = flat,vlan,gre,vxlan
>
> # (ListOpt) Ordered list of network_types to allocate as tenant
> # networks. The default value 'local' is useful for single-box testing
> # but provides no connectivity between hosts.
> #
> # tenant_network_types = local
> tenant_network_types = vxlan
> # Example: tenant_network_types = vlan,gre,vxlan
>
> # (ListOpt) Ordered list of networking mechanism driver entrypoints
> # to be loaded from the neutron.ml2.mechanism_drivers namespace.
> # mechanism_drivers =
> mechanism_drivers =openvswitch,l2population
> # Example: mechanism drivers = openvswitch,mlnx
> # Example: mechanism_drivers = arista
> # Example: mechanism_drivers = cisco,logger
> # Example: mechanism_drivers = openvswitch,brocade
> # Example: mechanism_drivers = linuxbridge,brocade
>
> [ml2_type_flat]
> # (ListOpt) List of physical_network names with which flat networks
> # can be created. Use * to allow flat networks with arbitrary
> # physical_network names.
> #
> # flat_networks =
> # Example:flat_networks = physnet1,physnet2
> # Example:flat_networks = *
>
> [ml2_type_vlan]
> # (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
> # specifying physical_network names usable for VLAN provider and
> # tenant networks, as well as ranges of VLAN tags on each
> # physical_network available for allocation as tenant networks.
> #
> # network_vlan_ranges =
> # Example: network_vlan_ranges = physnet1:1000:2999,physnet2
>
> [ml2_type_gre]
> # (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating
> # ranges of GRE tunnel IDs that are available for tenant network allocation
> # tunnel_id_ranges =
>
> [ml2_type_vxlan]
> # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
> # ranges of VXLAN VNI IDs that are available for tenant network allocation.
> #
> # vni_ranges =
> vni_ranges =101:1677
>
> # (StrOpt) Multicast group for the VXLAN interface. When configured, will
> # enable sending all broadcast traffic to this multicast group. When left
> # unconfigured, will disable multicast VXLAN mode.
> #
> # vxlan_group =
> vxlan_group =224.0.0.1
> # Example: vxlan_group = 239.1.1.1
>
> [securitygroup]
> firewall_driver=True
>
>
>
> ==============================================================================================================================================================
> /etc/neutron/l3_agent.ini
>
> ==============================================================================================================================================================
>
> [DEFAULT]
> # Show debugging output in log (sets DEBUG log level output)
> # debug = False
> debug = False
>
> # L3 requires that an interface driver be set. Choose the one that best
> # matches your plugin.
> # interface_driver =
> interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
>
> # Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC)
> # that supports L3 agent
> # interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
>
> # Use veth for an OVS interface or not.
> # Support kernels with limited namespace support
> # (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.
> # ovs_use_veth = False
>
> # Example of interface_driver option for LinuxBridge
> # interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
>
> # Allow overlapping IP (Must have kernel build with CONFIG_NET_NS=y and
> # iproute2 package that supports namespaces).
> # use_namespaces = True
> use_namespaces = True
>
> # If use_namespaces is set as False then the agent can only configure one
> # router.
>
> # This is done by setting the specific router_id.
> # router_id =
>
> # Each L3 agent can be associated with at most one external network.  This
> # value should be set to the UUID of that external network.  If empty,
> # the agent will enforce that only a single external network exists and
> # use that external network id
> # gateway_external_network_id =
>
> # Indicates that this L3 agent should also handle routers that do not have
> # an external network gateway configured.  This option should be True only
> # for a single agent in a Neutron deployment, and may be False for all
> # agents if all routers must have an external network gateway
> # handle_internal_only_routers = True
> handle_internal_only_routers = True
>
> # Name of bridge used for external network traffic. This should be set to
> # empty value for the linux bridge
> # external_network_bridge = br-ex
> external_network_bridge = br-ex
>
> # TCP Port used by Neutron metadata server
> # metadata_port = 9697
> metadata_port = 9697
>
> # Send this many gratuitous ARPs for HA setup. Set it below or equal to 0
> # to disable this feature.
> # send_arp_for_ha = 3
> send_arp_for_ha = 3
>
> # seconds between re-sync routers' data if needed
> # periodic_interval = 40
> periodic_interval = 40
>
> # seconds to start to sync routers' data after
> # starting agent
> # periodic_fuzzy_delay = 5
> periodic_fuzzy_delay = 5
>
> # enable_metadata_proxy, which is true by default, can be set to False
> # if the Nova metadata server is not available
> # enable_metadata_proxy = True
> enable_metadata_proxy = True
>
> # Location of Metadata Proxy UNIX domain socket
> # metadata_proxy_socket = $state_path/metadata_proxy
>
>
>
>
> 2014-03-27 22:32 GMT+01:00 Sławek Kapłoński <slawek at kaplonski.pl>:
>
>> Hello,
>>
>> I think you should check which mechanism drivers you have set up in the
>> ml2 config file, and also check whether the OVS agent on the compute
>> host is working correctly.
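>>
>> For example, a rough sketch of the checks I mean (the log path may
>> differ between distros):
>>
>>   grep mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini
>>   neutron agent-list    # is the Open vSwitch agent on the compute host alive?
>>   tail -n 50 /var/log/neutron/openvswitch-agent.log    # run on the compute host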
>>
>> --
>> Best regards
>> Sławek Kapłoński
>>
>> On Thursday, 27 March 2014 at 21:09:19, Tom Verdaat wrote:
>>  > Hi all,
>> >
>> > I've been trying to get a multi-host openstack havana deployment to
>> > work on Ubuntu 13.10 with neutron, using the ML2 plugin, OVS agent and
>> > VXLAN for tenant networks. Created networks, subnets and routers inside
>> > neutron, and according to neutron they are all active and up. All ports
>> > however are DOWN when created, with the error
>> > "binding:vif_type=binding_failed".
>> >
>> > Haven't been able to find useful stuff online so far on this issue, or
>> > on the vif_type parameter in general. Can anyone tell me what this
>> > error is about and how I can fix it? What might I have done wrong?
>> > Where do I start?
>> >
>> > Thanks a lot!
>> >
>> > Tom
>>