[openstack-dev] Not able to launch LXC based VM due to nova-network service.

Matt Riedemann mriedem at linux.vnet.ibm.com
Mon Oct 26 15:27:24 UTC 2015



On 10/24/2015 10:26 AM, Rahul Arora wrote:
> Hi Team,
>
> I am working on the Icehouse release of OpenStack. *I am able to launch
> VMs using KVM/QEMU.*
>
> Now I *want to launch an LXC based VM* using the same release.
>
> But I am getting the error below while launching the LXC based VM. These
> errors come from the *nova-network service.*
>
> self._get_networks_by_uuids(context, network_uuids)
>   File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 1824, in _get_networks_by_uuids
>     context, network_uuids, project_only=True)
>   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 110, in wrapper
>     args, kwargs)
>   File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 425, in object_class_action
>     objver=objver, args=args, kwargs=kwargs)
>   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 150, in call
>     wait_for_reply=True, timeout=timeout)
>   File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send
>     timeout=timeout)
>   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 412, in send
>     return self._send(target, ctxt, message, wait_for_reply, timeout)
>   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 405, in _send
>     raise result
> NoNetworksFound_Remote: No networks defined.
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 597, in _object_dispatch
>     return getattr(target, method)(context, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 112, in wrapper
>     result = fn(cls, context, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/nova/objects/network.py", line 183, in get_by_uuids
>     project_only)
>   File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 965, in network_get_all_by_uuids
>     project_only=project_only)
>   File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 164, in wrapper
>     return f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2582, in network_get_all_by_uuids
>     raise exception.NoNetworksFound()
> NoNetworksFound: No networks defined.
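>
> As I read this traceback, the lookup dies inside nova's own database
> layer (nova/db/sqlalchemy/api.py), not in neutron. Here is a toy
> reconstruction of what I think is happening (my own hypothetical
> stand-ins, not the real nova code):
>
> class NoNetworksFound(Exception):
>     pass
>
> def network_get_all_by_uuids(networks_table, network_uuids):
>     # nova-network resolves the requested UUIDs against nova's own
>     # `networks` table; a network created in neutron lives in neutron's
>     # database instead, so nothing matches here.
>     result = [n for n in networks_table if n["uuid"] in network_uuids]
>     if not result:
>         # Raised on the conductor side and re-raised over RPC on the
>         # caller's side with the "_Remote" suffix seen above.
>         raise NoNetworksFound("No networks defined.")
>     return result
>
> # nova's table is effectively empty for my network, so this raises:
> network_get_all_by_uuids([], ["my-network-uuid"])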
>
> *Details of the error*
> =============
>
> When I create a network for the LXC VM, I can see it with the *neutron
> net-list* command, but nova apparently cannot access that same network:
> *nova net-list* shows nothing (no networks). I believe that is why the
> error above occurs.
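>
> For reference, these are the two commands whose output I am comparing:
>
> $ neutron net-list     # the network I created for the LXC VM is listed
> $ nova net-list        # comes back empty -- no networks at all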
>
> Below are my nova and neutron configuration files.
>
> *1. NOVA CONF*
>
> [DEFAULT]
> firewall_driver = nova.virt.firewall.NoopFirewallDriver
> compute_driver = libvirt.LibvirtDriver
> libvirt_cpu_mode = host-model
> default_floating_pool = public
> fixed_range =
> force_dhcp_release = True
> dhcpbridge_flagfile = /etc/nova/nova.conf
> compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
> rootwrap_config = /etc/nova/rootwrap.conf
> api_paste_config = /etc/nova/api-paste.ini
> allow_resize_to_same_host = true
> auth_strategy = keystone
> instances_path = /etc/nova/instances
> debug = False
> verbose = True
> my_ip = 192.168.2.99
> glance_host = 192.168.2.99
> lock_path = /var/lock/nova/
> libvirt_images_type = default
>
> [libvirt]
> virt_type = lxc
>
>
> vnc_enabled = False
> vncserver_listen =
> novncproxy_base_url = http://:6080/vnc_auto.html
> vncserver_proxyclient_address =
>
> flat_interface = eth0
> flat_network_bridge = br1
> vlan_interface = eth0
> public_interface = br1
> network_manager = nova.network.manager.FlatDHCPManager
> fixed_range =
> force_dhcp_release = False
> dhcpbridge = /usr/bin/nova-dhcpbridge
>
> sql_connection = mysql://nova:NOVA_DBPASS@192.168.2.99/nova
>
> rpc_backend = rabbit
> rabbit_host = 192.168.2.99
> rabbit_port = 5672
>
> neutron_url = http://192.168.2.99:9696
> network_api_class = nova.network.neutronv2.api.API
> security_group_api = neutron
> neutron_auth_strategy = keystone
> neutron_admin_tenant_name = service
> neutron_admin_username = neutron
> neutron_admin_password = NEUTRON_PASS
> neutron_admin_auth_url = http://192.168.2.99:35357/v2.0/
> linuxnet_interface_driver = nova.network.NoopFirewallDriver
>
> vif_plugging_timeout = 10
> vif_plugging_is_fatal = False
>
> instance_usage_audit=True
> instance_usage_audit_period = hour
> notify_on_state_change = vm_and_task_state
> notification_driver = nova.openstack.common.notifier.rpc_notifier
>
> libvirt_images_rbd_pool = cinder-volumes
> libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_user = cinder-volume
>
> service_neutron_metadata_proxy = true
> neutron_metadata_proxy_shared_secret = ADMIN_PASS
>
> [spice]
> agent_enabled = True
> enabled = True
> html5proxy_base_url = http://:6082/spice_auto.html
> keymap = en-us
> server_listen =
> server_proxyclient_address =
>
> auth_strategy = keystone
>
> [keystone_authtoken]
> auth_uri = http://192.168.2.99:5000
> auth_host = 192.168.2.99
> auth_port = 35357
> auth_protocol = http
> admin_tenant_name = service
> admin_user = nova
> admin_password = NOVA_PASS
>
>
> *2. NEUTRON CONF*
>
> [DEFAULT]
> state_path = /var/lib/neutron
> lock_path = $state_path/lock
> core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
> api_paste_config = api-paste.ini
> auth_strategy = keystone
> rpc_backend = neutron.openstack.common.rpc.impl_kombu
> rabbit_host = icehouse-controller
> notification_driver = neutron.openstack.common.notifier.rpc_notifier
> notify_nova_on_port_status_changes = True
> notify_nova_on_port_data_changes = True
> nova_url = http://icehouse-controller:8774/v2
> nova_admin_username = nova
> nova_admin_tenant_id = SERVICE_TENANT_ID
> nova_admin_password = NOVA_PASS
> nova_admin_auth_url = http://icehouse-controller:35357/v2.0
> [quotas]
> [agent]
> [keystone_authtoken]
> auth_uri = http://icehouse-controller:5000
> auth_host = icehouse-controller
> auth_port = 35357
> auth_protocol = http
> admin_tenant_name = service
> admin_user = neutron
> admin_password = NEUTRON_PASS
> [database]
> connection = mysql://neutron:NEUTRON_DBPASS@icehouse-controller/neutron
> [service_providers]
> service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
> service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
>
> =====================
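>
> My suspicion is that nova is still driving nova-network, since my
> nova.conf mixes FlatDHCPManager settings with the neutron ones (and my
> linuxnet_interface_driver points at a firewall driver class, which looks
> wrong to me). Going by the Icehouse install guide, my understanding of
> the neutron-only [DEFAULT] settings is roughly this (my reading,
> possibly wrong):
>
> [DEFAULT]
> network_api_class = nova.network.neutronv2.api.API
> security_group_api = neutron
> firewall_driver = nova.virt.firewall.NoopFirewallDriver
> linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
> # ...with the nova-network options removed entirely:
> # network_manager, flat_interface, flat_network_bridge, vlan_interface,
> # public_interface, dhcpbridge, dhcpbridge_flagfile, fixed_range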
>
>
> Please help me with this. I have not been able to solve this issue.
>
> Thanks
>
>
> Regards
>
> Rahul Arora
>
>

This is not an appropriate question for the openstack-dev mailing list, 
which is for development discussion only. Icehouse is no longer 
supported upstream. If you're looking for support, it belongs on the 
more general openstack mailing list:

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Or the forum:

https://ask.openstack.org/en/questions/

-- 

Thanks,

Matt Riedemann



