<div dir="ltr">Thanks Zho, after tracing log as you specify , its now working.<div><br></div><div><br></div><div>John.</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Jul 28, 2014 at 6:50 AM, ZHOU TAO A <span dir="ltr"><<a href="mailto:tao.a.zhou@alcatel-lucent.com" target="_blank">tao.a.zhou@alcatel-lucent.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
You can try running neutron-server directly from the command line,<br>
or edit /etc/init.d/neutron-server: find the line<br>
daemon --user neutron --pidfile $pidfile "$exec ${configs[@]/#/--config-file } --log-file $logfile &>/dev/null & echo \$! > $pidfile"<br>
<br>
and edit it to something like this<br>
daemon --user neutron --pidfile $pidfile "$exec ${configs[@]/#/--config-file } --log-file $logfile &>/tmp/yourlog & echo \$! > $pidfile"<br>
then run service neutron-server start and check the log /tmp/yourlog.<div><div class="h5"><br>
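For reference, a minimal sketch of the first suggestion, running the server in the foreground so any startup traceback prints straight to the terminal. The config file paths are the usual defaults and may differ on your install:<br>
<pre>
# run neutron-server in the foreground as the neutron user;
# a startup exception is printed to the terminal instead of being lost
sudo -u neutron /usr/bin/neutron-server \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
</pre>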
<br>
<div>On 07/27/2014 12:13 PM, john decot
wrote:<br>
</div>
<blockquote type="cite">
No. Doing everything manually.<br>
<br>
On Sunday, July 27, 2014, Remo Mattei <<a href="mailto:remo@italy1.com" target="_blank">remo@italy1.com</a>>
wrote:<br>
> Did you use packstack?<br>
><br>
> Sent from iPhone ()<br>
> On Jul 26, 2014, at 20:34, john decot <<a href="mailto:johndecot@gmail.com" target="_blank">johndecot@gmail.com</a>> wrote:<br>
><br>
> CentOS<br>
><br>
> On Sunday, July 27, 2014, Remo Mattei <<a href="mailto:remo@italy1.com" target="_blank">remo@italy1.com</a>>
wrote:<br>
>> Is this CentOS/RH or Ubuntu?<br>
>><br>
>> Sent from iPhone ()<br>
>> On Jul 26, 2014, at 19:52, john decot <<a href="mailto:johndecot@gmail.com" target="_blank">johndecot@gmail.com</a>> wrote:<br>
>><br>
>> The neutron.conf and ml2_conf.ini files are as follows:<br>
>> neutron.conf:<br>
>> [DEFAULT]<br>
>> # Print more verbose output (set logging level to INFO
instead of default WARNING level).<br>
>> # verbose = True<br>
>> # Print debugging output (set logging level to DEBUG
instead of default WARNING level).<br>
>> # debug = False<br>
>> # Where to store Neutron state files. This directory
must be writable by the<br>
>> # user executing the agent.<br>
>> # state_path = /var/lib/neutron<br>
>> # Where to store lock files<br>
>> # lock_path = $state_path/lock<br>
>> # log_format = %(asctime)s %(levelname)8s [%(name)s]
%(message)s<br>
>> # log_date_format = %Y-%m-%d %H:%M:%S<br>
>> # use_syslog -> syslog<br>
>> # log_file and log_dir ->
log_dir/log_file<br>
>> # (not log_file) and log_dir ->
log_dir/{binary_name}.log<br>
>> # use_stderr -> stderr<br>
>> # (not user_stderr) and (not log_file) -> stdout<br>
>> # publish_errors -> notification
system<br>
>> # use_syslog = False<br>
>> # syslog_log_facility = LOG_USER<br>
>> # use_stderr = False<br>
>> # log_file =<br>
>> # log_dir =<br>
>> # publish_errors = False<br>
>> # Address to bind the API server to<br>
>> # bind_host = 0.0.0.0<br>
>> # Port the bind the API server to<br>
>> # bind_port = 9696<br>
>> # Path to the extensions. Note that this can be a
colon-separated list of<br>
>> # paths. For example:<br>
>> # api_extensions_path =
extensions:/path/to/more/extensions:/even/more/extensions<br>
>> # The __path__ of neutron.extensions is appended to this,
so if your<br>
>> # extensions are in there you don't need to specify them
here<br>
>> # api_extensions_path =<br>
>> # (StrOpt) Neutron core plugin entrypoint to be loaded
from the<br>
>> # neutron.core_plugins namespace. See setup.cfg for the
entrypoint names of the<br>
>> # plugins included in the neutron source distribution.
For compatibility with<br>
>> # previous versions, the class name of a plugin can be
specified instead of its<br>
>> # entrypoint name.<br>
>> #<br>
>> # core_plugin =<br>
>> # Example: core_plugin = ml2<br>
>> core_plugin = ml2<br>
>> # (ListOpt) List of service plugin entrypoints to be
loaded from the<br>
>> # neutron.service_plugins namespace. See setup.cfg for
the entrypoint names of<br>
>> # the plugins included in the neutron source
distribution. For compatibility<br>
>> # with previous versions, the class name of a plugin can
be specified instead<br>
>> # of its entrypoint name.<br>
>> #<br>
>> # service_plugins =<br>
>> # Example: service_plugins =
router,firewall,lbaas,vpnaas,metering<br>
>> service_plugins = router<br>
>> # Paste configuration file<br>
>> # api_paste_config = /usr/share/neutron/api-paste.ini<br>
>> # The strategy to be used for auth.<br>
>> # Supported values are 'keystone'(default), 'noauth'.<br>
>> # auth_strategy = noauth<br>
>> auth_strategy = keystone<br>
>> # Base MAC address. The first 3 octets will remain
unchanged. If the<br>
>> # 4h octet is not 00, it will also be used. The others
will be<br>
>> # randomly generated.<br>
>> # 3 octet<br>
>> # base_mac = fa:16:3e:00:00:00<br>
>> # 4 octet<br>
>> # base_mac = fa:16:3e:4f:00:00<br>
>> # Maximum amount of retries to generate a unique MAC
address<br>
>> # mac_generation_retries = 16<br>
>> # DHCP Lease duration (in seconds)<br>
>> # dhcp_lease_duration = 86400<br>
>> # Allow sending resource operation notification to DHCP
agent<br>
>> dhcp_agent_notification = True<br>
>> # Enable or disable bulk create/update/delete operations<br>
>> # allow_bulk = True<br>
>> # Enable or disable pagination<br>
>> # allow_pagination = False<br>
>> # Enable or disable sorting<br>
>> # allow_sorting = False<br>
>> # Enable or disable overlapping IPs for subnets<br>
>> # Attention: the following parameter MUST be set to False
if Neutron is<br>
>> # being used in conjunction with nova security groups<br>
>> allow_overlapping_ips = True<br>
>> # Ensure that configured gateway is on subnet<br>
>> # force_gateway_on_subnet = False<br>
>><br>
>> # RPC configuration options. Defined in rpc __init__<br>
>> # The messaging module to use, defaults to kombu.<br>
>> rpc_backend = neutron.openstack.common.rpc.impl_kombu<br>
>> # Size of RPC thread pool<br>
>> # rpc_thread_pool_size = 64<br>
>> # Size of RPC connection pool<br>
>> # rpc_conn_pool_size = 30<br>
>> # Seconds to wait for a response from call or multicall<br>
>> # rpc_response_timeout = 60<br>
>> # Seconds to wait before a cast expires (TTL). Only
supported by impl_zmq.<br>
>> # rpc_cast_timeout = 30<br>
>> # Modules of exceptions that are permitted to be
recreated<br>
>> # upon receiving exception data from an rpc call.<br>
>> # allowed_rpc_exception_modules =
neutron.openstack.common.exception, nova.exception<br>
>> # AMQP exchange to connect to if using RabbitMQ or QPID<br>
>> control_exchange = neutron<br>
>> # If passed, use a fake RabbitMQ provider<br>
>> # fake_rabbit = False<br>
>> # Configuration options if sending notifications via
kombu rpc (these are<br>
>> # the defaults)<br>
>> # SSL version to use (valid only if SSL enabled)<br>
>> # kombu_ssl_version =<br>
>> # SSL key file (valid only if SSL enabled)<br>
>> # kombu_ssl_keyfile =<br>
>> # SSL cert file (valid only if SSL enabled)<br>
>> # kombu_ssl_certfile =<br>
>> # SSL certification authority file (valid only if SSL
enabled)<br>
>> # kombu_ssl_ca_certs =<br>
>> # IP address of the RabbitMQ installation<br>
>> rabbit_host = localhost<br>
>> # Password of the RabbitMQ server<br>
>> rabbit_password = xxxxxx<br>
>> # Port where RabbitMQ server is running/listening<br>
>> rabbit_port = 5672<br>
>> # RabbitMQ single or HA cluster (host:port pairs i.e:
host1:5672, host2:5672)<br>
>> # rabbit_hosts is defaulted to
'$rabbit_host:$rabbit_port'<br>
>> # rabbit_hosts = localhost:5672<br>
>> # User ID used for RabbitMQ connections<br>
>> rabbit_userid = guest<br>
>> # Location of a virtual RabbitMQ installation.<br>
>> # rabbit_virtual_host = /<br>
>> # Maximum retries with trying to connect to RabbitMQ<br>
>> # (the default of 0 implies an infinite retry count)<br>
>> # rabbit_max_retries = 0<br>
>> # RabbitMQ connection retry interval<br>
>> # rabbit_retry_interval = 1<br>
>> # Use HA queues in RabbitMQ (x-ha-policy: all). You need
to<br>
>> # wipe RabbitMQ database when changing this option.
(boolean value)<br>
>> # rabbit_ha_queues = false<br>
>> # QPID<br>
>> # rpc_backend=neutron.openstack.common.rpc.impl_qpid<br>
>> # Qpid broker hostname<br>
>> # qpid_hostname = localhost<br>
>> # Qpid broker port<br>
>> # qpid_port = 5672<br>
>> # Qpid single or HA cluster (host:port pairs i.e:
host1:5672, host2:5672)<br>
>> # qpid_hosts is defaulted to '$qpid_hostname:$qpid_port'<br>
>> # qpid_hosts = localhost:5672<br>
>> # Username for qpid connection<br>
>> # qpid_username = ''<br>
>> # Password for qpid connection<br>
>> # qpid_password = ''<br>
>> # Space separated list of SASL mechanisms to use for auth<br>
>> # qpid_sasl_mechanisms = ''<br>
>> # Seconds between connection keepalive heartbeats<br>
>> # qpid_heartbeat = 60<br>
>> # Transport to use, either 'tcp' or 'ssl'<br>
>> # qpid_protocol = tcp<br>
>> # Disable Nagle algorithm<br>
>> # qpid_tcp_nodelay = True<br>
>> # ZMQ<br>
>> # rpc_backend=neutron.openstack.common.rpc.impl_zmq<br>
>> # ZeroMQ bind address. Should be a wildcard (*), an
ethernet interface, or IP.<br>
>> # The "host" option should point or resolve to this
address.<br>
>> # rpc_zmq_bind_address = *<br>
>> # ============ Notification System Options
=====================<br>
>> # Notifications can be sent when network/subnet/port are
created, updated or deleted.<br>
>> # There are three methods of sending notifications:
logging (via the<br>
>> # log_file directive), rpc (via a message queue) and<br>
>> # noop (no notifications sent, the default)<br>
>> # Notification_driver can be defined multiple times<br>
>> # Do nothing driver<br>
>> # notification_driver =
neutron.openstack.common.notifier.no_op_notifier<br>
>> # Logging driver<br>
>> # notification_driver =
neutron.openstack.common.notifier.log_notifier<br>
>> # RPC driver.<br>
>> # notification_driver =
neutron.openstack.common.notifier.rpc_notifier<br>
>> # default_notification_level is used to form actual topic
name(s) or to set logging level<br>
>> # default_notification_level = INFO<br>
>> # default_publisher_id is a part of the notification
payload<br>
>> # host = <a href="http://myhost.com" target="_blank">myhost.com</a><br>
>> # default_publisher_id = $host<br>
>> # Defined in rpc_notifier, can be comma separated values.<br>
>> # The actual topic names will be
%s.%(default_notification_level)s<br>
>> # notification_topics = notifications<br>
>> # Default maximum number of items returned in a single
response,<br>
>> # value == infinite and value < 0 means no max limit,
and value must<br>
>> # be greater than 0. If the number of items requested is
greater than<br>
>> # pagination_max_limit, server will just return
pagination_max_limit<br>
>> # of number of items.<br>
>> # pagination_max_limit = -1<br>
>> # Maximum number of DNS nameservers per subnet<br>
>> # max_dns_nameservers = 5<br>
>> # Maximum number of host routes per subnet<br>
>> # max_subnet_host_routes = 20<br>
>> # Maximum number of fixed ips per port<br>
>> # max_fixed_ips_per_port = 5<br>
>> # =========== items for agent management extension
=============<br>
>> # Seconds to regard the agent as down; should be at least
twice<br>
>> # report_interval, to be sure the agent is down for good<br>
>> # agent_down_time = 75<br>
>> # =========== end of items for agent management
extension =====<br>
>> # =========== items for agent scheduler extension
=============<br>
>> # Driver to use for scheduling network to DHCP agent<br>
>> # network_scheduler_driver =
neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler<br>
>> # Driver to use for scheduling router to a default L3
agent<br>
>> # router_scheduler_driver =
neutron.scheduler.l3_agent_scheduler.ChanceScheduler<br>
>> # Driver to use for scheduling a loadbalancer pool to an
lbaas agent<br>
>> # loadbalancer_pool_scheduler_driver =
neutron.services.loadbalancer.agent_scheduler.ChanceScheduler<br>
>> # Allow auto scheduling networks to DHCP agent. It will
schedule non-hosted<br>
>> # networks to first DHCP agent which sends
get_active_networks message to<br>
>> # neutron server<br>
>> # network_auto_schedule = True<br>
>> # Allow auto scheduling routers to L3 agent. It will
schedule non-hosted<br>
>> # routers to first L3 agent which sends sync_routers
message to neutron server<br>
>> # router_auto_schedule = True<br>
>> # Number of DHCP agents scheduled to host a network. This
enables redundant<br>
>> # DHCP agents for configured networks.<br>
>> # dhcp_agents_per_network = 1<br>
>> # =========== end of items for agent scheduler extension
=====<br>
>> # =========== WSGI parameters related to the API server
==============<br>
>> # Number of separate worker processes to spawn. The
default, 0, runs the<br>
>> # worker thread in the current process. Greater than 0
launches that number of<br>
>> # child processes as workers. The parent process manages
them.<br>
>> # api_workers = 0<br>
>> # Number of separate RPC worker processes to spawn. The
default, 0, runs the<br>
>> # worker thread in the current process. Greater than 0
launches that number of<br>
>> # child processes as RPC workers. The parent process
manages them.<br>
>> # This feature is experimental until issues are addressed
and testing has been<br>
>> # enabled for various plugins for compatibility.<br>
>> # rpc_workers = 0<br>
>> # Sets the value of TCP_KEEPIDLE in seconds to use for
each server socket when<br>
>> # starting API server. Not supported on OS X.<br>
>> # tcp_keepidle = 600<br>
>> # Number of seconds to keep retrying to listen<br>
>> # retry_until_window = 30<br>
>> # Number of backlog requests to configure the socket
with.<br>
>> # backlog = 4096<br>
>> # Max header line to accommodate large tokens<br>
>> # max_header_line = 16384<br>
>> # Enable SSL on the API server<br>
>> # use_ssl = False<br>
>> # Certificate file to use when starting API server
securely<br>
>> # ssl_cert_file = /path/to/certfile<br>
>> # Private key file to use when starting API server
securely<br>
>> # ssl_key_file = /path/to/keyfile<br>
>> # CA certificate file to use when starting API server
securely to<br>
>> # verify connecting clients. This is an optional
parameter only required if<br>
>> # API clients need to authenticate to the API server
using SSL certificates<br>
>> # signed by a trusted CA<br>
>> # ssl_ca_file = /path/to/cafile<br>
>> # ======== end of WSGI parameters related to the API
server ==========<br>
>><br>
>> # ======== neutron nova interactions ==========<br>
>> # Send notification to nova when port status is active.<br>
>> notify_nova_on_port_status_changes = True<br>
>> # Send notifications to nova when port data
(fixed_ips/floatingips) change<br>
>> # so nova can update it's cache.<br>
>> notify_nova_on_port_data_changes = True<br>
>> # URL for connection to nova (Only supports one nova
region currently).<br>
>> nova_url = <a href="http://127.0.0.1:8774/v2" target="_blank">http://127.0.0.1:8774/v2</a><br>
>> # Name of nova region to use. Useful if keystone manages
more than one region<br>
>> # nova_region_name =<br>
>> nova_region_name = RegionOne<br>
>> # Username for connection to nova in admin context<br>
>> nova_admin_username = nova<br>
>> # The uuid of the admin nova tenant<br>
>> # nova_admin_tenant_id =<br>
>> nova_admin_tenant_id = 8b83c2b6d22f4dfd82e15b105ea10ae3<br>
>> # Password for connection to nova in admin context.<br>
>> nova_admin_password = xxxxxx<br>
>> # Authorization URL for connection to nova in admin
context.<br>
>> # nova_admin_auth_url =<br>
>> nova_admin_auth_url = <a href="http://localhost:35357/v2.0" target="_blank">http://localhost:35357/v2.0</a><br>
>> # Number of seconds between sending events to nova if
there are any events to send<br>
>> # send_events_interval = 2<br>
>> # ======== end of neutron nova interactions ==========<br>
>> [quotas]<br>
>> # Default driver to use for quota checks<br>
>> # quota_driver = neutron.db.quota_db.DbQuotaDriver<br>
>> # Resource name(s) that are supported in quota features<br>
>> # quota_items = network,subnet,port<br>
>> # Default number of resource allowed per tenant. A
negative value means<br>
>> # unlimited.<br>
>> # default_quota = -1<br>
>> # Number of networks allowed per tenant. A negative value
means unlimited.<br>
>> # quota_network = 10<br>
>> # Number of subnets allowed per tenant. A negative value
means unlimited.<br>
>> # quota_subnet = 10<br>
>> # Number of ports allowed per tenant. A negative value
means unlimited.<br>
>> # quota_port = 50<br>
>> # Number of security groups allowed per tenant. A
negative value means<br>
>> # unlimited.<br>
>> # quota_security_group = 10<br>
>> # Number of security group rules allowed per tenant. A
negative value means<br>
>> # unlimited.<br>
>> # quota_security_group_rule = 100<br>
>> # Number of vips allowed per tenant. A negative value
means unlimited.<br>
>> # quota_vip = 10<br>
>> # Number of pools allowed per tenant. A negative value
means unlimited.<br>
>> # quota_pool = 10<br>
>> # Number of pool members allowed per tenant. A negative
value means unlimited.<br>
>> # The default is unlimited because a member is not a real
resource consumer<br>
>> # on Openstack. However, on back-end, a member is a
resource consumer<br>
>> # and that is the reason why quota is possible.<br>
>> # quota_member = -1<br>
>> # Number of health monitors allowed per tenant. A
negative value means<br>
>> # unlimited.<br>
>> # The default is unlimited because a health monitor is
not a real resource<br>
>> # consumer on Openstack. However, on back-end, a member
is a resource consumer<br>
>> # and that is the reason why quota is possible.<br>
>> # quota_health_monitors = -1<br>
>> # Number of routers allowed per tenant. A negative value
means unlimited.<br>
>> # quota_router = 10<br>
>> # Number of floating IPs allowed per tenant. A negative
value means unlimited.<br>
>> # quota_floatingip = 50<br>
>> [agent]<br>
>> # Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
to use the real<br>
>> # root filter facility.<br>
>> # Change to "sudo" to skip the filtering and just run the
comand directly<br>
>> root_helper = sudo neutron-rootwrap
/etc/neutron/rootwrap.conf<br>
>> # =========== items for agent management extension
=============<br>
>> # seconds between nodes reporting state to server; should
be less than<br>
>> # agent_down_time, best if it is half or less than
agent_down_time<br>
>> # report_interval = 30<br>
>> # =========== end of items for agent management
extension =====<br>
>> [keystone_authtoken]<br>
>> auth_uri = <a href="http://192.168.168.100:5000" target="_blank">http://192.168.168.100:5000</a><br>
>> auth_host = 127.0.0.1<br>
>> auth_port = 35357<br>
>> auth_protocol = http<br>
>> admin_tenant_name = service<br>
>> admin_user = neutron<br>
>> admin_password = xxxxxx<br>
>> signing_dir = $state_path/keystone-signing<br>
>> [database]<br>
>> # This line MUST be changed to actually run the plugin.<br>
>> # Example:<br>
>> connection = mysql://<a href="http://neutron:xxxxxx@127.0.0.1/neutron_ml2" target="_blank">neutron:xxxxxx@127.0.0.1/neutron_ml2</a><br>
>> # Replace 127.0.0.1 above with the IP address of the
database used by the<br>
>> # main neutron server. (Leave it as is if the database
runs on this host.)<br>
>> # connection = sqlite://<br>
>> # The SQLAlchemy connection string used to connect to the
slave database<br>
>> # slave_connection =<br>
>> # Database reconnection retry times - in event
connectivity is lost<br>
>> # set to -1 implies an infinite retry count<br>
>> # max_retries = 10<br>
>> # Database reconnection interval in seconds - if the
initial connection to the<br>
>> # database fails<br>
>> # retry_interval = 10<br>
>> # Minimum number of SQL connections to keep open in a
pool<br>
>> # min_pool_size = 1<br>
>> # Maximum number of SQL connections to keep open in a
pool<br>
>> # max_pool_size = 10<br>
>> # Timeout in seconds before idle sql connections are
reaped<br>
>> # idle_timeout = 3600<br>
>> # If set, use this value for max_overflow with sqlalchemy<br>
>> # max_overflow = 20<br>
>> # Verbosity of SQL debugging information. 0=None,
100=Everything<br>
>> # connection_debug = 0<br>
>> # Add python stack traces to SQL as comment strings<br>
>> # connection_trace = False<br>
>> # If set, use this value for pool_timeout with sqlalchemy<br>
>> # pool_timeout = 10<br>
>> [service_providers]<br>
>> # Specify service providers (drivers) for advanced
services like loadbalancer, VPN, Firewall.<br>
>> # Must be in form:<br>
>> #
service_provider=<service_type>:<name>:<driver>[:default]<br>
>> # List of allowed service types includes LOADBALANCER,
FIREWALL, VPN<br>
>> # Combination of <service type> and <name>
must be unique; <driver> must also be unique<br>
>> # This is multiline option, example for default provider:<br>
>> #
service_provider=LOADBALANCER:name:lbaas_plugin_driver_path:default<br>
>> # example of non-default provider:<br>
>> # service_provider=FIREWALL:name2:firewall_driver_path<br>
>> # --- Reference implementations ---<br>
>> # service_provider =
LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default<br>
>>
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default<br>
>> # In order to activate Radware's lbaas driver you need to
uncomment the next line.<br>
>> # If you want to keep the HA Proxy as the default lbaas
driver, remove the attribute default from the line below.<br>
>> # Otherwise comment the HA Proxy line<br>
>> # service_provider =
LOADBALANCER:Radware:neutron.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver:default<br>
>> # uncomment the following line to make the 'netscaler'
LBaaS provider available.<br>
>> #
service_provider=LOADBALANCER:NetScaler:neutron.services.loadbalancer.drivers.netscaler.netscaler_driver.NetScalerPluginDriver<br>
>> # Uncomment the following line (and comment out the
OpenSwan VPN line) to enable Cisco's VPN driver.<br>
>> #
service_provider=VPN:cisco:neutron.services.vpn.service_drivers.cisco_ipsec.CiscoCsrIPsecVPNDriver:default<br>
>> # Uncomment the line below to use Embrane heleos as Load
Balancer service provider.<br>
>> #
service_provider=LOADBALANCER:Embrane:neutron.services.loadbalancer.drivers.embrane.driver.EmbraneLbaas:default<br>
>><br>
>><br>
>> and ml2_conf.ini:<br>
>> [ml2]<br>
>> # (ListOpt) List of network type driver entrypoints to be
loaded from<br>
>> # the neutron.ml2.type_drivers namespace.<br>
>> #<br>
>> # type_drivers = local,flat,vlan,gre,vxlan<br>
>> # Example: type_drivers = flat,vlan,gre,vxlan<br>
>> type_drivers = flat,vlan,gre<br>
>> tenant_network_types = vlan,gre<br>
>> mechanism_drivers = openvswitch<br>
>> # (ListOpt) Ordered list of network_types to allocate as
tenant<br>
>> # networks. The default value 'local' is useful for
single-box testing<br>
>> # but provides no connectivity between hosts.<br>
>> #<br>
>> # tenant_network_types = local<br>
>> # Example: tenant_network_types = vlan,gre,vxlan<br>
>> # (ListOpt) Ordered list of networking mechanism driver
entrypoints<br>
>> # to be loaded from the neutron.ml2.mechanism_drivers
namespace.<br>
>> # mechanism_drivers =<br>
>> # Example: mechanism drivers = openvswitch,mlnx<br>
>> # Example: mechanism_drivers = arista<br>
>> # Example: mechanism_drivers = cisco,logger<br>
>> # Example: mechanism_drivers = openvswitch,brocade<br>
>> # Example: mechanism_drivers = linuxbridge,brocade<br>
>> [ml2_type_flat]<br>
>> # (ListOpt) List of physical_network names with which
flat networks<br>
>> # can be created. Use * to allow flat networks with
arbitrary<br>
>> # physical_network names.<br>
>> #<br>
>> # flat_networks =<br>
>> # Example:flat_networks = physnet1,physnet2<br>
>> # Example:flat_networks = *<br>
>> [ml2_type_vlan]<br>
>> # (ListOpt) List of
<physical_network>[:<vlan_min>:<vlan_max>]
tuples<br>
>> # specifying physical_network names usable for VLAN
provider and<br>
>> # tenant networks, as well as ranges of VLAN tags on each<br>
>> # physical_network available for allocation as tenant
networks.<br>
>> #<br>
>> # network_vlan_ranges =<br>
>> # Example: network_vlan_ranges =
physnet1:1000:2999,physnet2<br>
>> [ml2_type_gre]<br>
>> # (ListOpt) Comma-separated list of
<tun_min>:<tun_max> tuples enumerating ranges of GRE
tunnel IDs that are available for tenant network allocation<br>
>> # tunnel_id_ranges =<br>
>> tunnel_id_ranges = 1:1000<br>
>> [ml2_type_vxlan]<br>
>> # (ListOpt) Comma-separated list of
<vni_min>:<vni_max> tuples enumerating<br>
>> # ranges of VXLAN VNI IDs that are available for tenant
network allocation.<br>
>> #<br>
>> # vni_ranges =<br>
>> # (StrOpt) Multicast group for the VXLAN interface. When
configured, will<br>
>> # enable sending all broadcast traffic to this multicast
group. When left<br>
>> # unconfigured, will disable multicast VXLAN mode.<br>
>> #<br>
>> # vxlan_group =<br>
>> # Example: vxlan_group = 239.1.1.1<br>
>> [securitygroup]<br>
>> # Controls if neutron security group is enabled or not.<br>
>> # It should be false when you use nova security group.<br>
>> firewall_driver =
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br>
>> enable_security_group = True<br>
>><br>
>> Rgds,<br>
>> John<br>
>><br>
>> On Sun, Jul 27, 2014 at 8:22 AM, john decot <<a href="mailto:johndecot@gmail.com" target="_blank">johndecot@gmail.com</a>>
wrote:<br>
>>><br>
>>> Hi, <br>
>>> <br>
>>> there is no upstart file in the log directory.<br>
>>><br>
>>> Regards,<br>
>>> John.<br>
>>><br>
>>> On Sun, Jul 27, 2014 at 8:14 AM, Gangur, Hrushikesh
(R & D HP Cloud) <<a href="mailto:hrushikesh.gangur@hp.com" target="_blank">hrushikesh.gangur@hp.com</a>>
wrote:<br>
>>>><br>
>>>> Do you see any neutron logs getting created in /var/log/upstart?<br>
>>>><br>
>>>> <br>
>>>><br>
>>>> From: john decot [mailto:<a href="mailto:johndecot@gmail.com" target="_blank">johndecot@gmail.com</a>]<br>
>>>> Sent: Saturday, July 26, 2014 6:14 AM<br>
>>>> To: <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a><br>
>>>> Subject: [Openstack] neutron-server cannot start<br>
>>>><br>
>>>> <br>
>>>><br>
>>>> Hi,<br>
>>>><br>
>>>> I am new to OpenStack. I am trying to bring neutron-server up, but it fails to start.<br>
>>>><br>
>>>> <br>
>>>><br>
>>>> The output of /etc/init.d/neutron-server status is:<br>
>>>><br>
>>>> neutron dead but pid file exists<br>
>>>><br>
>>>> <br>
>>>><br>
>>>> tail -f /var/log/messages shows:<br>
>>>><br>
>>>> detected unhandled Python exception in '/usr/bin/neutron-server'<br>
>>>><br>
>>>> Not saving repeating crash in '/usr/bin/neutron-server'<br>
>>>><br>
>>>> <br>
>>>><br>
>>>> <br>
>>>><br>
>>>> The directory /var/log/neutron is empty.<br>
>>>><br>
>>>> <br>
>>>><br>
>>>> Any help is appreciated.<br>
>>>><br>
>>>> <br>
>>>><br>
>>>> Thank you,<br>
>>>><br>
>>>> <br>
>>>><br>
>>>> Regards,<br>
>>>><br>
>>>> John<br>
>><br>
>><br>
>><br>
<br>
<br>
</blockquote>
<br>
</div></div></div>
<br>_______________________________________________<br>
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
Post to : <a href="mailto:openstack@lists.openstack.org">openstack@lists.openstack.org</a><br>
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
<br></blockquote></div><br></div>