[Openstack] Nova-compute is not running using devstack

Andreas Scheuring scheuran at linux.vnet.ibm.com
Fri Oct 9 07:36:28 UTC 2015


Hi, 
try updating your nova.conf as described here [1].



[1]
https://ask.openstack.org/en/question/67340/starting-nova-compute-fails-with-missing-cpu-model-name-error/
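
For the archive: the fix described in [1] boils down to telling the libvirt driver not to derive a CPU model from the (virtual) host. A minimal nova.conf sketch, with option names as of Kilo/Liberty, so double-check against your release:

```ini
[libvirt]
# Plain QEMU emulation, since kvm-ok reports no KVM extensions
virt_type = qemu
# Don't baseline the host CPU model; avoids
# "libvirtError: XML error: Missing CPU model name"
cpu_mode = none
```

Then restart the nova-compute service.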



-- 
Andreas
(IRC: scheuran)



On Fri, 2015-10-09 at 11:40 +0530, sahil arora wrote:
> The nova-compute service is not currently running on the system, hence
> the availability zones are not being shown. Below are the error logs
> when I try to run it manually:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 457, in fire_timers
>     timer()
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 58, in __call__
>     cb(*args, **kw)
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 168, in _do_send
>     waiter.switch(result)
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
>     result = function(*args, **kwargs)
>   File "/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 645, in run_service
>     service.start()
>   File "/opt/stack/nova/nova/service.py", line 164, in start
>     self.manager.init_host()
>   File "/opt/stack/nova/nova/compute/manager.py", line 1297, in init_host
>     self.driver.init_host(host=self.host)
>   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 562, in init_host
>     self._do_quality_warnings()
>   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 540, in _do_quality_warnings
>     caps = self._host.get_capabilities()
>   File "/opt/stack/nova/nova/virt/libvirt/host.py", line 773, in get_capabilities
>     libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
>     result = proxy_call(self._autowrap, f, *args, **kwargs)
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
>     rv = execute(f, *args, **kwargs)
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
>     six.reraise(c, e, tb)
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
>     rv = meth(*args, **kwargs)
>   File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 3153, in baselineCPU
>     if ret is None: raise libvirtError('virConnectBaselineCPU() failed', conn=self)
> libvirtError: XML error: Missing CPU model name
> 
> Output of kvm-ok and version of libvirtd:
> 
> stack@ubuntu:~/devstack$ sudo /usr/sbin/kvm-ok
> INFO: Your CPU does not support KVM extensions
> KVM acceleration can NOT be used
> stack@ubuntu:~/devstack$ libvirtd --version
> libvirtd (libvirt) 1.2.2
> 
> Output of /proc/cpuinfo:
> 
> stack@ubuntu:~/devstack$ cat /proc/cpuinfo
> processor       : 0
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 13
> model name      : QEMU Virtual CPU version (cpu64-rhel6)
> stepping        : 3
> microcode       : 0x1
> cpu MHz         : 1799.999
> cache size      : 4096 KB
> physical id     : 0
> siblings        : 1
> core id         : 0
> cpu cores       : 1
> apicid          : 0
> initial apicid  : 0
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 4
> wp              : yes
> flags           : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall lm nopl pni cx16 hypervisor lahf_lm
> bugs            :
> bogomips        : 3599.99
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 46 bits physical, 48 bits virtual
> power management:
> 
> processor       : 1
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 13
> model name      : QEMU Virtual CPU version (cpu64-rhel6)
> stepping        : 3
> microcode       : 0x1
> cpu MHz         : 1799.999
> cache size      : 4096 KB
> physical id     : 1
> siblings        : 1
> core id         : 0
> cpu cores       : 1
> apicid          : 1
> initial apicid  : 1
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 4
> wp              : yes
> flags           : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall lm nopl pni cx16 hypervisor lahf_lm
> bugs            :
> bogomips        : 3599.99
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 46 bits physical, 48 bits virtual
> 
> 
> Apart from this, I am using QEMU as the libvirt virtualization type.
> 
> 
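The `hypervisor` flag and the "QEMU Virtual CPU" model name in the cpuinfo above are the telltale signs of running DevStack inside a QEMU guest, which is exactly the situation that trips up libvirt's CPU baselining. A small Python sketch (a hypothetical helper, not part of nova) of detecting that condition from /proc/cpuinfo:

```python
def is_nested_qemu(cpuinfo_text):
    """Heuristic: True when /proc/cpuinfo shows a QEMU guest CPU, the
    situation in which virConnectBaselineCPU() can fail with
    'Missing CPU model name'."""
    flags = set()
    model_name = ""
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "flags":
            flags.update(value.split())
        elif key == "model name":
            model_name = value
    # Both conditions hold for the cpuinfo dump quoted above.
    return "hypervisor" in flags and model_name.startswith("QEMU Virtual CPU")

# The poster's cpuinfo, abridged:
sample = """processor\t: 0
model name\t: QEMU Virtual CPU version (cpu64-rhel6)
flags\t\t: fpu de pse tsc msr pae hypervisor lahf_lm
"""
print(is_nested_qemu(sample))  # True
```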
> On Thu, Oct 8, 2015 at 5:30 PM,
> <openstack-request at lists.openstack.org> wrote:
>         
>         Today's Topics:
>         
>            1. Unable to associate floating ip into instance using
>            Nova
>               Cells and nova-network (Bruno Grazioli)
>            2. Re: OpenStack (Devstack) and Opendaylight (Silvia
>         Fichera)
>            3. Re: LBaaS & VPNaaS (James Denton)
>            4. [OSSA 2015-021] Nova network security group changes are
>         not
>               applied to running instances (CVE-2015-7713) (Tristan
>         Cacqueray)
>            5. Re: Cinder - Ceph RADOS with libvirt+Xen Project (Adam
>         Lawson)
>            6. problem with nova-docker and neutron (Reza Bakhshayeshi)
>            7. Re: problem with nova-docker and neutron (Nasir Mahmood)
>            8. [OpenStack] Sahara fails to launch hadoop cluster
>               (varun bhatnagar)
>            9. [Neutron][Heat] Liberty RC2 available (Thierry Carrez)
>           10. Re: neutron net-list empty (Somanchi Trinath)
>           11. Re: [OpenStack] Sahara fails to launch hadoop cluster
>               (Sergey Reshetnyak)
>           12. Re: LBaaS & VPNaaS (Yngvi Páll Þorfinnsson)
>           13. [Horizon] Liberty RC2 available (Thierry Carrez)
>           14. Re: [OpenStack] Sahara fails to launch hadoop cluster
>               (varun bhatnagar)
>           15. Re: Cinder vs Swift architecture (saurabh suman)
>           16. Re: LBaaS & VPNaaS (Paul Michali)
>           17. Mac Address Question (Georgios Dimitrakakis)
>         
>         
>         ----------------------------------------------------------------------
>         
>         Message: 1
>         Date: Wed, 7 Oct 2015 15:43:53 +0200
>         From: Bruno Grazioli <bruno.graziol at gmail.com>
>         To: openstack at lists.openstack.org
>         Subject: [Openstack] Unable to associate floating ip into
>         instance
>                 using   Nova Cells and nova-network
>         Message-ID:
>         
>         <CAGOjcFboVeNH=Ly2dukt5RYQrCpGiqOk=anE9G0fW9qYdopHNA at mail.gmail.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         Hi,
>         
>         We have a Liberty/stable deployment from the git repository
>         using nova cells and nova-network on each compute cell. We are
>         able to perform standard operations such as start, suspend,
>         terminate VMs, etc., but we are unable to associate a floating
>         ip to an instance. Checking the [source code](
>         https://github.com/openstack/nova/blob/master/nova/compute/cells_api.py#L438)
>         it is clear that this operation is supported in Nova Cells,
>         although this method is not called in our setup - we've added
>         logging to this method.
>         
>         Investigating a bit further, we found out that floating ip
>         association is done via the [network api](
>         https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/floating_ips.py#L236)
>         and is not linked to the cell api call. We've tried to change
>         *network_api_class* in nova.conf on the API cell, pointing it
>         to ComputeCellsAPI as the documentation shows for
>         *compute_api_class*, but no luck. In our setup we have
>         nova-network running on each compute cell, and floating ip
>         management is done on the api cell.
>         
>         Here are my nova.conf files:
>         
>         API Cell:
>         http://paste.openstack.org/show/9lyKm3m3XrkJbT3toB0h/
>         
>         Compute Cell (All-in-one installation):
>         http://paste.openstack.org/show/iXQCviopyTdZIUs81NgG/
>         
>         Is there a way to enable floating ips association in a Nova
>         Cell
>         architecture using nova-network?
>         
>         Many thanks,
>         Bruno.
>         
>         ------------------------------
>         
>         Message: 2
>         Date: Wed, 7 Oct 2015 16:01:11 +0200
>         From: Silvia Fichera <fichera.sil at gmail.com>
>         To: Srinivasa Rao Kandula <srinivasaraokandula1 at gmail.com>
>         Cc: "openstack at lists.openstack.org"
>         <openstack at lists.openstack.org>,
>                 openflow-discuss <openflow-discuss at lists.stanford.edu>
>         Subject: Re: [Openstack] OpenStack (Devstack) and Opendaylight
>         Message-ID:
>                 <CAEFdjR3khUv1wJDq1fOqPdCncZVoFYrWiAESZ0N
>         +uvjfcZSZVQ at mail.gmail.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         Hi Srinivas,
>         thank you for your suggestions.
>         I've downloaded Lithium and modified my local.conf according
>         to yours but I
>         have a multinode implementation.
>         So I have uncommented the line related to the multihost.
>         The stacking of the controller node was successful... but when
>         I try to create an instance via Horizon, it gives me this
>         error:
>         
>         *Error: * Failed to launch instance "instance": Please try
>         again later
>         [Error: No valid host was found. ]
>         
>         Then I tried to stack the first compute node, and it was
>         unsuccessful...
>         
>         "2015-10-07 13:43:32.405 |   File "update.py", line 35, in
>         <module>
>         2015-10-07 13:43:32.405 |     from openstack_requirements
>         import project
>         2015-10-07 13:43:32.405 | ImportError: No module named
>         openstack_requirements
>         2015-10-07 13:43:32.410 | + exit_trap
>         2015-10-07 13:43:32.410 | + local r=1
>         2015-10-07 13:43:32.411 | ++ jobs -p
>         2015-10-07 13:43:32.412 | + jobs=
>         2015-10-07 13:43:32.412 | + [[ -n '' ]]
>         2015-10-07 13:43:32.413 | + kill_spinner
>         2015-10-07 13:43:32.413 | + '[' '!' -z '' ']'
>         2015-10-07 13:43:32.413 | + [[ 1 -ne 0 ]]
>         2015-10-07 13:43:32.413 | + echo 'Error on exit'
>         2015-10-07 13:43:32.413 | Error on exit
>         2015-10-07 13:43:32.413 | + [[ -z /opt/stack/logs ]]
>         2015-10-07 13:43:32.413 | + /home/silvia/devstack/tools/worlddump.py -d /opt/stack/logs
>         2015-10-07 13:43:32.478 | + exit 1"
>         
>         I'm also copying the local.conf related to the nodes:
>         
>         *CONTROLLER:*
>         
>         [[local|localrc]]
>         
>         HOST_IP=10.30.3.231
>         HOST_NAME=ctrl
>         SERVICE_HOST_NAME=$HOST_NAME
>         SERVICE_HOST=$HOST_IP
>         MULTI_HOST=1
>         RECLONE=yes
>         #OFFLINE=true
>         
>         #IMAGE_URLS+="
>         http://launchpad.net/cirros/trunk/0.3.2/+download/cirros-0.3.2-x86_64-uec.tar.gz,http://berrange.fedorapeople.org/images/2012-02$
>         #IMAGE_URLS+="
>         https://cloud-images.ubuntu.com/releases/14.04/release/ubuntu-14.04-server-cloudimg-arm64-disk1.img
>         "
>         
>         MYSQL_HOST=$SERVICE_HOST
>         RABBIT_HOST=$SERVICE_HOST
>         GLANCE_HOSTPORT=$SERVICE_HOST:9292
>         KEYSTONE_AUTH_HOST=$SERVICE_HOST
>         KEYSTONE_SERVICE_HOST=$SERVICE_HOST
>         ADMIN_PASSWORD=stack
>         MYSQL_PASSWORD=$ADMIN_PASSWORD
>         RABBIT_PASSWORD=$ADMIN_PASSWORD
>         SERVICE_PASSWORD=$ADMIN_PASSWORD
>         SERVICE_TOKEN=tokennekot
>         
>         SCREEN_LOGDIR=$DEST/logs/screen
>         LOGFILE=$DEST/logs/stack.sh.log
>         #LOG_COLOR=False
>         Q_PLUGIN=ml2
>         Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,logger
>         ENABLE_TENANT_TUNNELS=True
>         Q_ML2_TENANT_NETWORK_TYPE=vxlan
>         
>         
>         VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
>         VNCSERVER_LISTEN=0.0.0.0
>         
>         disable_service n-net
>         disable_service cinder
>         disable_service swift
>         
>         #ENABLED_SERVICES+=,neutron,quantum,q-svc,q-meta,odl-compute,n-cpu,q-dhcp,q-l3
>         ENABLED_SERVICES+=,neutron,q-svc,n-novnc,n-cpu,nova,q-meta,q-dhcp,q-l3,odl-neutron,odl-compute
>         #ENABLED_SERVICES=neutron,nova,n-cpu,n-novnc,rabbit
>         
>         ODL_MODE=externalodl
>         ODL_PROVIDER_MAPPINGS=physnet1:eth0
>         
>         #ODL_MGR_IP=127.0.0.1
>         ODL_MGR_IP=10.30.3.234
>         #ODL_LOCAL_IP=192.168.56.101
>         
>         ODL_PORT=8080
>         ODL_MGR_PORT=6640
>         [[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
>         [agent]
>         minimize_polling=True
>         
>         [ml2_odl]
>         url=http://$ODL_MGR_IP:8080/controller/nb/v2/neutron
>         username=admin
>         password=admin
>         
>         *COMPUTE NODE:*
>         
>         [[local|localrc]]
>         
>         HOST_IP=10.30.3.232
>         
>         HOST_NAME=node2
>         SERVICE_HOST=10.30.3.231
>         SERVICE_HOST_NAME=ctrl
>         MULTI_HOST=1
>         RECLONE=yes
>         #OFFLINE=true
>         
>         #IMAGE_URLS+="
>         http://launchpad.net/cirros/trunk/0.3.2/+download/cirros-0.3.2-x86_64-uec.tar.gz,http://berrange.fedorapeople.org/images/2012-02$
>         #IMAGE_URLS+="
>         https://cloud-images.ubuntu.com/releases/14.04/release/ubuntu-14.04-server-cloudimg-arm64-disk1.img
>         "
>         
>         MYSQL_HOST=$SERVICE_HOST
>         RABBIT_HOST=$SERVICE_HOST
>         GLANCE_HOSTPORT=$SERVICE_HOST:9292
>         KEYSTONE_AUTH_HOST=$SERVICE_HOST
>         KEYSTONE_SERVICE_HOST=$SERVICE_HOST
>         ADMIN_PASSWORD=stack
>         MYSQL_PASSWORD=$ADMIN_PASSWORD
>         RABBIT_PASSWORD=$ADMIN_PASSWORD
>         SERVICE_PASSWORD=$ADMIN_PASSWORD
>         SERVICE_TOKEN=tokennekot
>         
>         SCREEN_LOGDIR=$DEST/logs/screen
>         LOGFILE=$DEST/logs/stack.sh.log
>         #LOG_COLOR=False
>         
>         Q_PLUGIN=ml2
>         Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,logger
>         ENABLE_TENANT_TUNNELS=True
>         Q_ML2_TENANT_NETWORK_TYPE=vxlan
>         
>         
>         VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
>         VNCSERVER_LISTEN=0.0.0.0
>         
>         ENABLED_SERVICES=neutron,nova,n-cpu,n-novnc,rabbit
>         
>         
>         ODL_PROVIDER_MAPPINGS=physnet1:eth2
>         
>         #ODL_MGR_IP=127.0.0.1
>         ODL_MGR_IP=10.30.3.234
>         
>         Thank you for your help
>         
>         Silvia
>         
>         2015-10-06 19:47 GMT+02:00 Srinivasa Rao Kandula <
>         srinivasaraokandula1 at gmail.com>:
>         
>         > Hi,
>         >
>         >  I have recently brought up the openstack and odl set up
>         with ODL in VM
>         > and Openstack in another VM.
>         >  Here is my local.conf for single node openstack set up for
>         Juno stable
>         > release.
>         >
>         >  I was using the ODL Lithium release and installed the below
>         > features:
>         >  feature:install odl-base-all odl-aaa-authn odl-restconf
>         odl-nsf-all
>         > odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack
>         > odl-ovsdb-northbound odl-dlux-core
>         >
>         >
>         >
>         >  [[local|localrc]]
>         > GIT_BASE=https://github.com
>         >
>         > HOST_IP=192.168.0.106
>         > HOST_NAME=devstack
>         > SERVICE_HOST_NAME=$HOST_NAME
>         > SERVICE_HOST=$HOST_IP
>         > #MULTI_HOST=1
>         > #RECLONE=yes
>         > #OFFLINE=true
>         >
>         > #IMAGE_URLS+="
>         >
>         http://launchpad.net/cirros/trunk/0.3.2/+download/cirros-0.3.2-x86_64-uec.tar.gz,http://berrange.fedorapeople.org/images/2012-02-29/f16-x86_64-openstack-sda.qcow2
>         > "
>         > #IMAGE_URLS+="
>         >
>         https://cloud-images.ubuntu.com/releases/14.04/release/ubuntu-14.04-server-cloudimg-arm64-disk1.img
>         > "
>         >
>         > MYSQL_HOST=$SERVICE_HOST
>         > RABBIT_HOST=$SERVICE_HOST
>         > GLANCE_HOSTPORT=$SERVICE_HOST:9292
>         > KEYSTONE_AUTH_HOST=$SERVICE_HOST
>         > KEYSTONE_SERVICE_HOST=$SERVICE_HOST
>         > ADMIN_PASSWORD=devstack
>         > #ADMIN_PASSWORD=password
>         > MYSQL_PASSWORD=$ADMIN_PASSWORD
>         > RABBIT_PASSWORD=$ADMIN_PASSWORD
>         > SERVICE_PASSWORD=$ADMIN_PASSWORD
>         > SERVICE_TOKEN=tokennekot
>         >
>         > SCREEN_LOGDIR=$DEST/logs/screen
>         > LOGFILE=$DEST/logs/stack.sh.log
>         >
>         >
>         >
>         > Q_PLUGIN=ml2
>         > Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,logger
>         > ENABLE_TENANT_TUNNELS=True
>         > Q_ML2_TENANT_NETWORK_TYPE=vxlan
>         >
>         > VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
>         > VNCSERVER_LISTEN=0.0.0.0
>         >
>         > disable_service n-net
>         > disable_service cinder
>         > disable_service swift
>         >
>         >
>         >
>         >
>         > ENABLED_SERVICES+=,neutron,q-svc,n-novnc,n-cpu,nova,q-meta,q-dhcp,q-l3,odl-compute
>         >
>         >
>         > ODL_MODE=externalodl
>         > ODL_PROVIDER_MAPPINGS=physnet1:eth0
>         >
>         >
>         > ODL_MGR_IP=192.168.0.117
>         >
>         >
>         > ODL_PORT=8080
>         >
>         > ODL_MGR_PORT=6640
>         >
>         > [[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
>         > [agent]
>         > minimize_polling=True
>         >
>         > [ml2_odl]
>         > url=http://$ODL_MGR_IP:8080/controller/nb/v2/neutron
>         > username=admin
>         > password=admin
>         >
>         >
>         > Thanks,
>         > Srinivas
>         >
>         > On Tue, Oct 6, 2015 at 12:13 AM, saurabh suman
>         <90.suman at gmail.com> wrote:
>         >
>         >> Looks like you have already stacked once on that system and
>         >> then cleaned it, but some of the python dependency files
>         >> remained there. Is that the case? Also, before stacking,
>         >> make sure your ODL is up and running.
>         >>
>         >> Regards,
>         >> Saurav
>         >>
>         >> On Mon, Oct 5, 2015 at 8:21 PM, Silvia Fichera
>         <fichera.sil at gmail.com>
>         >> wrote:
>         >>
>         >>> Hi all,
>         >>> I'm trying to integrate OpenStack with Opendaylight and I
>         was following
>         >>> this guide
>         >>>
>         https://wiki.opendaylight.org/view/OVSDB:OVSDB_OpenStack_Guide
>         >>> but I have some errors when I stack.
>         >>> (Something like this: ImportError: No module named
>         >>> openstack_requirements)
>         >>> Moreover, the links related to the local.conf are no
>         >>> longer available.
>         >>> Do you have an updated guide for using Openstack together
>         >>> with Opendaylight (stable, if possible)?
>         >>>
>         >>> Thanks a lot
>         >>>
>         >>> --
>         >>> Silvia Fichera
>         >>>
>         >>> _______________________________________________
>         >>> Mailing list:
>         >>>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         >>> Post to     : openstack at lists.openstack.org
>         >>> Unsubscribe :
>         >>>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         >>>
>         >>>
>         >>
>         >> _______________________________________________
>         >> Mailing list:
>         >>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         >> Post to     : openstack at lists.openstack.org
>         >> Unsubscribe :
>         >>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         >>
>         >>
>         >
>         
>         
>         --
>         Silvia Fichera
>         
>         ------------------------------
>         
>         Message: 3
>         Date: Wed, 7 Oct 2015 16:58:10 +0000
>         From: James Denton <james.denton at rackspace.com>
>         To: Yngvi Páll Þorfinnsson <yngvith at siminn.is>
>         Cc: "openstack at lists.openstack.org"
>         <openstack at lists.openstack.org>
>         Subject: Re: [Openstack] LBaaS & VPNaaS
>         Message-ID:
>         <A6246A75-5477-40D4-80A7-055FD907048B at rackspace.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         Hi Yngvi,
>         
>         In my most recent experience with VPNaaS on Kilo, I did the
>         following (all on the controller node):
>         
>         1. Install VPN agent
>         
>         apt-get install neutron-vpnaas-agent
>         
>         2. Edit /etc/neutron/vpn_agent.ini and add the following to
>         configure the device driver:
>         
>         [vpnagent]
>         vpn_device_driver =
>         neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver
>         
>         3. Edit /etc/neutron/neutron.conf and add vpnaas to the list
>         of service plugins:
>         
>         service_plugins = router,vpnaas
>         
>         4. Edit /etc/neutron/neutron_vpnaas.conf and configure the
>         service provider:
>         
>         [service_providers]
>         service_provider =
>         VPN:vpnaas:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
>         
>         5. Restart Neutron service:
>         
>         service neutron-server restart
>         
>         6. Update AppArmor profile:
>         
>         sudo ln -sf /etc/apparmor.d/usr.lib.ipsec.charon /etc/apparmor.d/disable/
>         sudo ln -sf /etc/apparmor.d/usr.lib.ipsec.stroke /etc/apparmor.d/disable/
>         service apparmor restart
>         
>         7. Work around https://bugs.launchpad.net/neutron/+bug/1456335
>         <https://bugs.launchpad.net/neutron/+bug/1456335>
>         cat >> /usr/bin/neutron-vpn-netns-wrapper << EOF
>         #!/usr/bin/python2
>         
>         import sys
>         
>         from neutron_vpnaas.services.vpn.common.netns_wrapper import main
>         
>         if __name__ == "__main__":
>             sys.exit(main())
>         EOF
>         
>         8. Set permissions:
>         
>         chmod 755 /usr/bin/neutron-vpn-netns-wrapper
>         
>         9. Restart VPN agent
>         
>         service neutron-vpn-agent restart
>         
>         
>         Here are the instructions for LBaaS. Again, this is for Kilo
>         but may work with Juno as well:
>         
>         1. Install agent:
>         
>         apt-get install neutron-lbaas-agent
>         
>         2. Define interface driver. This is specific to OVS or
>         LinuxBridge. Edit the /etc/neutron/lbaas_agent.ini file and
>         add the following:
>         
>         [DEFAULT]
>         interface_driver =
>         neutron.agent.linux.interface.BridgeInterfaceDriver
>         
>         -OR-
>         
>         interface_driver =
>         neutron.agent.linux.interface.OVSInterfaceDriver
>         
>         3. Define the device driver in /etc/neutron/lbaas_agent.ini:
>         
>         [DEFAULT]
>         device_driver =
>         neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
>         
>         4. Define service provider in /etc/neutron/neutron_lbaas.conf:
>         
>         [service_providers]
>         service_provider =
>         LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
>         
>         5. Define service plugin in /etc/neutron/neutron.conf:
>         
>         service_plugins = router,vpnaas,lbaas
>         
>         6. Restart Neutron service:
>         
>         service neutron-server restart
>         
>         7. Restart LBaaS agent:
>         
>         service neutron-lbaas-agent restart
>         
>         
>         No returns and no warranty! Give it a shot and let me know.
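        
        [Editorial note] Steps 3-5 on each list above boil down to
        appending entries to ini files. A small Python sketch of
        automating the service_plugins edit (hypothetical helper;
        real neutron.conf files can contain constructs configparser
        cannot round-trip, so treat it as illustrative only):

```python
import configparser
import io

def add_service_plugin(conf_text, plugin):
    """Append `plugin` to service_plugins in [DEFAULT] if missing."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    current = [p.strip()
               for p in cp["DEFAULT"].get("service_plugins", "").split(",")
               if p.strip()]
    if plugin not in current:
        current.append(plugin)
    cp["DEFAULT"]["service_plugins"] = ",".join(current)
    out = io.StringIO()
    cp.write(out)
    return out.getvalue()

# e.g. step 3 of the VPNaaS instructions:
print(add_service_plugin("[DEFAULT]\nservice_plugins = router\n", "vpnaas"))
```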
>         
>         James Denton
>         Network Architect
>         Rackspace Private Cloud
>         james.denton at rackspace.com
>         
>         > On Oct 7, 2015, at 5:08 AM, Yngvi Páll Þorfinnsson
>         <yngvith at siminn.is> wrote:
>         >
>         > OK, thanks a lot Sayaji  ;-)
>         >
>         > Regards
>         > Yngvi
>         > From: Sayaji Patil [mailto:sayaji15 at gmail.com]
>         > Sent: 6. október 2015 18:21
>         > To: Yngvi Páll Þorfinnsson <yngvith at siminn.is>
>         > Cc: openstack at lists.openstack.org
>         > Subject: Re: [Openstack] LBaaS & VPNaaS
>         >
>         > I was able to get VPNaas working by following this link
>         >
>         > https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall
>         <https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall>
>         >
>         > Regards,
>         > Sayaji
>         >
>         > On Tue, Oct 6, 2015 at 3:38 AM, Yngvi Páll Þorfinnsson
>         <yngvith at siminn.is <mailto:yngvith at siminn.is>> wrote:
>         > Dear all
>         >
>         > Can anyone please advise me on a good "install guide for
>         > Openstack Juno" for LBaaS and VPNaaS?
>         > My openstack servers are all Ubuntu 14.04 LTS.
>         >
>         > Best regards
>         > Yngvi
>         >
>         > _______________________________________________
>         > Mailing list:
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack>
>         > Post to     : openstack at lists.openstack.org
>         <mailto:openstack at lists.openstack.org>
>         > Unsubscribe :
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack>
>         >
>         > _______________________________________________
>         > Mailing list:
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         > Post to     : openstack at lists.openstack.org
>         > Unsubscribe :
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         
>         
>         ------------------------------
>         
>         Message: 4
>         Date: Wed, 7 Oct 2015 18:33:26 +0000
>         From: Tristan Cacqueray <tdecacqu at redhat.com>
>         To: openstack-announce at lists.openstack.org,
>                 openstack at lists.openstack.org
>         Subject: [Openstack] [OSSA 2015-021] Nova network security
>         group
>                 changes are not applied to running instances
>         (CVE-2015-7713)
>         Message-ID: <56156576.1030001 at redhat.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         =======================================================================================
>         OSSA-2015-021: Nova network security group changes are not
>         applied to running instances
>         =======================================================================================
>         
>         :Date: October 06, 2015
>         :CVE: CVE-2015-7713
>         
>         
>         Affects
>         ~~~~~~~
>         - Nova: <=2014.2.3, >=2015.1.0, <=2015.1.1
>         
>         
>         Description
>         ~~~~~~~~~~~
>         Sreekumar S. and Suntao independently reported a vulnerability
>         in Nova
>         network. Security group changes silently fail to be applied to
>         already
>         running instances, potentially resulting in instances not
>         being
>         protected by the security group. All Nova network setups are
>         affected.
>         
>         
>         Patches
>         ~~~~~~~
>         - https://review.openstack.org/222026 (Juno)
>         - https://review.openstack.org/222023 (Kilo)
>         - https://review.openstack.org/222022 (Liberty)
>         
>         
>         Credits
>         ~~~~~~~
>         - Sreekumar S. (CVE-2015-7713)
>         - Suntao (CVE-2015-7713)
>         
>         
>         References
>         ~~~~~~~~~~
>         - https://bugs.launchpad.net/bugs/1491307
>         - https://bugs.launchpad.net/bugs/1484738
>         - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-7713
>         
>         
>         Notes
>         ~~~~~
>         - This fix will be included in future 2014.2.4 (juno) and
>         2015.1.2 (kilo)
>           releases.
>         
>         --
>         Tristan Cacqueray
>         OpenStack Vulnerability Management Team
>         
>         
>         ------------------------------
>         
>         Message: 5
>         Date: Wed, 7 Oct 2015 11:33:38 -0700
>         From: Adam Lawson <alawson at aqorn.com>
>         To: Leandro Mendes <theflockers at gmail.com>
>         Cc: openstack <openstack at lists.openstack.org>
>         Subject: Re: [Openstack] Cinder - Ceph RADOS with libvirt+Xen
>         Project
>         Message-ID:
>         
>         <CAJfWK4_3Qj1FYLtY8aCmoggAyw1reQp_nQg=C=9qrMGs4xNrwQ at mail.gmail.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         That's the primary reason I've found that folks move away; I
>         wasn't sure whether cloudstack was part of the overall picture,
>         so I figured I'd just outright mention the purple elephant in
>         the room. ; )
>         
>         
>         Are your VMs able to see the disk when NOT using Cinder+Ceph?
>         
>         
>         
>         *Adam Lawson*
>         
>         AQORN, Inc.
>         427 North Tatnall Street
>         Ste. 58461
>         Wilmington, Delaware 19801-2230
>         Toll-free: (844) 4-AQORN-NOW ext. 101
>         International: +1 302-387-4660
>         Direct: +1 916-246-2072
>         
>         
>         On Tue, Oct 6, 2015 at 9:13 PM, Leandro Mendes
>         <theflockers at gmail.com>
>         wrote:
>         
>         > Hi Adam, actually I already moved from CloudStack. Some things
>         > related to its architecture weren't fitting my environment well,
>         > and OpenStack solves 100% of my problems.
>         >
>         > My deployment is working well with Gluster, but I would like to
>         > make it work with Ceph.
>         >
>         > At this time, moving back to CloudStack is not an option. Maybe
>         > a move to KVM, but first I'd like to try a little more.
>         > On 07/10/2015 00:51, "Adam Lawson" <alawson at aqorn.com>
>         wrote:
>         >
>         >> Are you married to OpenStack? I know xen works beautifully
>         with
>         >> cloudstack... Just throwing it out there...
>         >> On Oct 6, 2015 7:17 PM, "Leandro Mendes"
>         <theflockers at gmail.com> wrote:
>         >>
>         >>> Hi Guys,
>         >>>
>         >>> I would like to know if anyone has succeeded in deploying
>         >>> Cinder using Libvirt+Xen Project with a Ceph RADOS backend.
>         >>>
>         >>> I tried but Xen Project can't find the disk.
>         >>>
>         >>> Has anyone tried to do the same?
>         >>>
>         >>> Thanks!
>         >>>
>         >>> _______________________________________________
>         >>> Mailing list:
>         >>>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         >>> Post to     : openstack at lists.openstack.org
>         >>> Unsubscribe :
>         >>>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         >>>
>         >>>
>         
>         ------------------------------
>         
>         Message: 6
>         Date: Thu, 8 Oct 2015 07:19:23 +0330
>         From: Reza Bakhshayeshi <reza.b2008 at gmail.com>
>         To: openstack <openstack at lists.openstack.org>
>         Subject: [Openstack] problem with nova-docker and neutron
>         Message-ID:
>         
>         <CAMGoRG0ACXs0qefMtUFMssdDbFLuPK9TK9GzA300GUKqAb76Cg at mail.gmail.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         Hi,
>         
>         I'm trying to set up nova-docker with neutron on Kilo,
>         but I receive the following error in nova-compute.log.
>         While the instance seems to be created successfully, I can't
>         ping or SSH into the container.
>         
>         2015-01-07 22:39:02.514 60733 ERROR novadocker.virt.docker.vifs [-] Failed to attach vif
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs Traceback (most recent call last):
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/vifs.py", line 420, in attach
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs     run_as_root=True)
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs   File "/usr/lib/python2.7/site-packages/nova/utils.py", line 206, in execute
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs     return processutils.execute(*cmd, **kwargs)
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 233, in execute
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs     cmd=sanitized_cmd)
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs ProcessExecutionError: Unexpected error while running command.
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ip netns exec fd607634a98b5fbfc53c5d7e11c92ef6de3a24cda5b1d8f59e5b56908348c02f ip link set nsf6d79c91-bc address fa:16:3e:4a:57:36
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs Exit code: 1
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs Stdout: u''
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs Stderr: u'"mount --make-rslave /" failed: Permission denied\n'
>         2015-01-07 22:39:02.514 60733 TRACE novadocker.virt.docker.vifs
>         
>         Any suggestion would be helpful.
>         
>         Regards,
>         Reza
>         
>         ------------------------------
>         
>         Message: 7
>         Date: Thu, 8 Oct 2015 11:36:02 +0500
>         From: Nasir Mahmood <nasir.mahmood at gmail.com>
>         To: Reza Bakhshayeshi <reza.b2008 at gmail.com>
>         Cc: openstack <openstack at lists.openstack.org>
>         Subject: Re: [Openstack] problem with nova-docker and neutron
>         Message-ID:
>         
>         <CACnWpYP2bMZMWJVy=vvt1fk57rLZudOUn7jAUaLMjZ_mqLR7Gg at mail.gmail.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         Reza,
>         
>         Looks like your sudo commands are failing due to improper sudo
>         settings for neutron... Try running some sample commands as the
>         neutron user and verify the result. Thanks
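Since the failing command runs through nova-rootwrap inside a network
namespace, one way to narrow this down is to replay it by hand (a sketch
only; the namespace id and interface name are copied from the trace above
and will differ on your system):

```shell
# Replay the failing command directly as root, bypassing rootwrap,
# to separate sudo/rootwrap problems from namespace problems.
sudo ip netns list
sudo ip netns exec fd607634a98b5fbfc53c5d7e11c92ef6de3a24cda5b1d8f59e5b56908348c02f \
    ip link set nsf6d79c91-bc address fa:16:3e:4a:57:36

# Then replay it through rootwrap to test the filter configuration.
sudo nova-rootwrap /etc/nova/rootwrap.conf ip netns exec \
    fd607634a98b5fbfc53c5d7e11c92ef6de3a24cda5b1d8f59e5b56908348c02f \
    ip link set nsf6d79c91-bc address fa:16:3e:4a:57:36
```

If the direct "ip netns exec" already fails with the same "mount
--make-rslave /" error, the culprit is the namespace/mount setup (e.g.
an AppArmor or SELinux denial) rather than the sudo settings.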
>         On 8 Oct 2015 09:01, "Reza Bakhshayeshi"
>         <reza.b2008 at gmail.com> wrote:
>         
>         
>         ------------------------------
>         
>         Message: 8
>         Date: Thu, 8 Oct 2015 10:12:46 +0200
>         From: varun bhatnagar <varun292006 at gmail.com>
>         To: openstack at lists.openstack.org
>         Subject: [Openstack] [OpenStack] Sahara fails to launch hadoop
>         cluster
>         Message-ID:
>                 <CAGxOggGH9QTuLsYbe7cqzDP
>         +c=DjvQPP3a0tTdCtT4Zu0b2KGA at mail.gmail.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         Hi,
>         
>         I am using a single-node OpenStack Kilo setup.
>         I am trying to launch a Hadoop cluster, and the cluster fails
>         after some time with an error message:
>         
>         NeutronClientException: 404 Not Found
>         
>         The resource could not be found.
>         
>         
>         Also, when I try to list the security groups using the neutron
>         command, I get the same error message:
>         [root at controller ~(keystone_admin)]# neutron security-group-list
>         404 Not Found
>         
>         The resource could not be found.
>         
>         I am using the nova security group:
>         [root at controller ~(keystone_admin)]# nova secgroup-list
>         +----+---------+-------------+
>         | Id | Name    | Description |
>         +----+---------+-------------+
>         | 1  | default | default     |
>         +----+---------+-------------+
>         
>         Can anyone please help and give any suggestion so that I can
>         move forward
>         and launch my cluster?
>         
>         
>         I am pasting the stack below:
>         
>         2015-10-07 15:55:27.198 1566 DEBUG neutronclient.v2_0.client [-] Error message: 404 Not Found
>         The resource could not be found.
>             _handle_fault_response /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:176
>         2015-10-07 15:55:28.469 1566 INFO sahara.utils.general [-] Cluster status has been changed: id=1adbb991-d903-4af8-9fd2-5ed392a253e3, New status=Error
>         2015-10-07 15:55:28.470 1566 ERROR sahara.utils.api [-] Request aborted with status code 500 and message 'Internal Server Error'
>         2015-10-07 15:55:28.645 1566 ERROR sahara.utils.api [-] Traceback (most recent call last):
>           File "/usr/lib/python2.7/site-packages/sahara/utils/api.py", line 90, in handler
>             return func(**kwargs)
>           File "/usr/lib/python2.7/site-packages/sahara/api/acl.py", line 44, in handler
>             return func(*args, **kwargs)
>           File "/usr/lib/python2.7/site-packages/sahara/service/validation.py", line 47, in handler
>             return func(*args, **kwargs)
>           File "/usr/lib/python2.7/site-packages/sahara/api/v10.py", line 50, in clusters_create
>             return u.render(api.create_cluster(data).to_wrapped_dict())
>           File "/usr/lib/python2.7/site-packages/sahara/service/api.py", line 112, in create_cluster
>             six.text_type(e))
>           File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
>             six.reraise(self.type_, self.value, self.tb)
>           File "/usr/lib/python2.7/site-packages/sahara/service/api.py", line 107, in create_cluster
>             quotas.check_cluster(cluster)
>           File "/usr/lib/python2.7/site-packages/sahara/service/quotas.py", line 52, in check_cluster
>             _check_limits(req_limits)
>           File "/usr/lib/python2.7/site-packages/sahara/service/quotas.py", line 73, in _check_limits
>             avail_limits = _get_avail_limits()
>           File "/usr/lib/python2.7/site-packages/sahara/service/quotas.py", line 126, in _get_avail_limits
>             limits.update(_get_neutron_limits())
>           File "/usr/lib/python2.7/site-packages/sahara/service/quotas.py", line 173, in _get_neutron_limits
>             tenant_id=tenant_id).get('security_groups', [])
>           File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 102, in with_params
>             ret = self.function(instance, *args, **kwargs)
>           File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 728, in list_security_groups
>             retrieve_all, **_params)
>           File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 307, in list
>             for r in self._pagination(collection, path, **params):
>           File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 320, in _pagination
>             res = self.get(path, params=params)
>           File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 293, in get
>             headers=headers, params=params)
>           File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 270, in retry_request
>             headers=headers, params=params)
>           File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 211, in do_request
>             self._handle_fault_response(status_code, replybody)
>           File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 185, in _handle_fault_response
>             exception_handler_v20(status_code, des_error_body)
>           File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 83, in exception_handler_v20
>             message=message)
>         NeutronClientException: 404 Not Found
>         The resource could not be found.
>         
>         ------------------------------
>         
>         Message: 9
>         Date: Thu, 8 Oct 2015 10:16:42 +0200
>         From: Thierry Carrez <thierry at openstack.org>
>         To: OpenStack Development Mailing List
>                 <openstack-dev at lists.openstack.org>,
>         openstack at lists.openstack.org
>         Subject: [Openstack] [Neutron][Heat] Liberty RC2 available
>         Message-ID: <5616266A.8080309 at openstack.org>
>         Content-Type: text/plain; charset=windows-1252
>         
>         Hello everyone,
>         
>         Due to a number of release-critical issues spotted in Neutron
>         and Heat
>         during RC1 testing (as well as last-minute translations
>         imports), new
>         release candidates were created for Liberty. The list of RC2
>         fixes, as
>         well as RC2 tarballs are available at:
>         
>         https://launchpad.net/neutron/liberty/liberty-rc2
>         https://launchpad.net/heat/liberty/liberty-rc2
>         
>         Unless new release-critical issues are found that warrant a
>         last-minute
>         release candidate respin, these tarballs will be formally
>         released as
>         final "Liberty" versions in a week. You are therefore strongly
>         encouraged to test and validate these tarballs!
>         
>         Alternatively, you can directly test the stable/liberty branch
>         at:
>         http://git.openstack.org/cgit/openstack/neutron/log/?h=stable/liberty
>         http://git.openstack.org/cgit/openstack/heat/log/?h=stable/liberty
>         
>         If you find an issue that could be considered
>         release-critical, please
>         file it at:
>         
>         https://bugs.launchpad.net/neutron/+filebug
>         or
>         https://bugs.launchpad.net/heat/+filebug
>         
>         and tag it *liberty-rc-potential* to bring it to the release
>         crew's
>         attention.
>         
>         Thanks!
>         
>         --
>         Thierry Carrez (ttx)
>         
>         
>         
>         ------------------------------
>         
>         Message: 10
>         Date: Thu, 8 Oct 2015 08:26:14 +0000
>         From: Somanchi Trinath <trinath.somanchi at freescale.com>
>         To: Siwei Zhang <siwzhang at teslamotors.com>,
>                 "openstack at lists.openstack.org"
>         <openstack at lists.openstack.org>
>         Subject: Re: [Openstack] neutron net-list empty
>         Message-ID:
>         
>         <BLUPR0301MB152255DF7C67FF9F3365E9C197350 at BLUPR0301MB1522.namprd03.prod.outlook.com>
>         
>         Content-Type: text/plain; charset="us-ascii"
>         
>         Hi-
>         
>         Did you create any networks? Also, were the networks created
>         successfully, without any errors?
>         
>         Please check the same.
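If no networks exist yet, the demo network from the Juno install guide
can be created like this (a sketch; the names and the CIDR are
placeholders to adapt to your environment):

```shell
# Create a tenant network plus subnet, then list networks again
# (Juno-era python-neutronclient CLI).
neutron net-create demo-net
neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet \
    --gateway 192.168.1.1
neutron net-list
```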
>         
>         -
>         Trinath
>         
>         From: Siwei Zhang [mailto:siwzhang at teslamotors.com]
>         Sent: Friday, October 02, 2015 10:15 PM
>         To: openstack at lists.openstack.org
>         Subject: [Openstack] neutron net-list empty
>         
>         Hi there,
>         
>         
>         I have installed most parts of OpenStack.
>         Right now I am at the last step: launching an instance
>         following the instructions in the link below:
>         
>         http://docs.openstack.org/juno/install-guide/install/yum/content/launch-instance-neutron.html
>         
>         However, when I run "neutron net-list" I get an empty list:
>         
>         [root at controller ~]# neutron net-list
>         
>         [root at controller ~]#
>         
>         
>         What should I do?
>         
>         Regards,
>         
>         Kevin
>         
>         
>         
>         
>         
>         ------------------------------
>         
>         Message: 11
>         Date: Thu, 8 Oct 2015 11:42:27 +0300
>         From: Sergey Reshetnyak <sreshetniak at mirantis.com>
>         To: varun bhatnagar <varun292006 at gmail.com>
>         Cc: openstack at lists.openstack.org
>         Subject: Re: [Openstack] [OpenStack] Sahara fails to launch
>         hadoop
>                 cluster
>         Message-ID:
>                 <CAOB5mPwPCD-K1sVT9zw1
>         +7C=bVn458yH-ZY-BmzbkUMJqBtgPQ at mail.gmail.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         Hi,
>         
>         Are you using neutron or nova-network for networking? If you
>         use nova-network, check that the "use_neutron" parameter is set
>         to "false" in sahara.conf.
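The fragment would look something like this (a sketch; the option name
follows the suggestion above, and the [DEFAULT] section placement is an
assumption):

```ini
# sahara.conf
[DEFAULT]
# Do not use the neutron client for quota/network lookups when the
# deployment uses nova-network.
use_neutron = false
```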
>         
>         2015-10-08 11:12 GMT+03:00 varun bhatnagar
>         <varun292006 at gmail.com>:
>         
>         
>         ------------------------------
>         
>         Message: 12
>         Date: Thu, 8 Oct 2015 08:54:40 +0000
>         From: Yngvi Páll Þorfinnsson <yngvith at siminn.is>
>         To: James Denton <james.denton at rackspace.com>
>         Cc: "openstack at lists.openstack.org"
>         <openstack at lists.openstack.org>
>         Subject: Re: [Openstack] LBaaS & VPNaaS
>         Message-ID:
>         <91a98508fc9e43569830e00dc90c4c0a at simi-mbx-04.siminn.is>
>         Content-Type: text/plain; charset="utf-8"
>         
>         Hi James
>         Thanks so much ;-)
>         
>         Regards
>         Yngvi
>         
>         From: James Denton [mailto:james.denton at rackspace.com]
>         Sent: 7. október 2015 16:58
>         To: Yngvi Páll Þorfinnsson <yngvith at siminn.is>
>         Cc: Sayaji Patil <sayaji15 at gmail.com>;
>         openstack at lists.openstack.org
>         Subject: Re: [Openstack] LBaaS & VPNaaS
>         
>         Hi Yngvi,
>         
>         In my most recent experience with VPNaaS on Kilo, I did the
>         following (all on the controller node):
>         
>         1. Install VPN agent
>         
>         apt-get install neutron-vpnaas-agent
>         
>         2. Edit /etc/neutron/vpn_agent.ini and add the following to
>         configure the device driver:
>         
>         [vpnagent]
>         vpn_device_driver =
>         neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver
>         
>         3. Edit /etc/neutron/neutron.conf and add vpnaas to the list
>         of service plugins:
>         
>         service_plugins = router,vpnaas
>         
>         4. Edit /etc/neutron/neutron_vpnaas.conf and configure the
>         service provider:
>         
>         [service_providers]
>         service_provider =
>         VPN:vpnaas:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
>         5. Restart Neutron service:
>         
>         service neutron-server restart
>         
>         6. Update AppArmor profile:
>         
>         sudo ln
>         -sf /etc/apparmor.d/usr.lib.ipsec.charon /etc/apparmor.d/disable/
>         sudo ln
>         -sf /etc/apparmor.d/usr.lib.ipsec.stroke /etc/apparmor.d/disable/
>         service apparmor restart
>         
>         7. Work around https://bugs.launchpad.net/neutron/+bug/1456335
>         cat > /usr/bin/neutron-vpn-netns-wrapper << 'EOF'
>         #!/usr/bin/python2
>         
>         import sys
>         
>         from neutron_vpnaas.services.vpn.common.netns_wrapper import
>         main
>         
>         if __name__ == "__main__":
>             sys.exit(main())
>         EOF
>         
>         8. Set permissions:
>         
>         chmod 755 /usr/bin/neutron-vpn-netns-wrapper
>         
>         9. Restart VPN agent
>         
>         service neutron-vpn-agent restart
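To sanity-check the result, something like this should work (a sketch;
Kilo-era neutron CLI):

```shell
# Verify the VPN agent registered with neutron and that the vpnaas
# service is answering.
neutron agent-list
neutron vpn-service-list
```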
>         
>         
>         Here are the instructions for LBaaS. Again, this is for Kilo
>         but may work with Juno as well:
>         
>         1. Install agent:
>         
>         apt-get install neutron-lbaas-agent
>         
>         2. Define interface driver. This is specific to OVS or
>         LinuxBridge. Edit the /etc/neutron/lbaas_agent.ini file and
>         add the following:
>         
>         [DEFAULT]
>         interface_driver =
>         neutron.agent.linux.interface.BridgeInterfaceDriver
>         
>         -OR-
>         
>         interface_driver =
>         neutron.agent.linux.interface.OVSInterfaceDriver
>         3. Define the device driver in /etc/neutron/lbaas_agent.ini:
>         
>         [DEFAULT]
>         device_driver =
>         neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
>         4. Define the service provider in /etc/neutron/neutron_lbaas.conf:
>         
>         [service_providers]
>         service_provider =
>         LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
>         5. Define service plugin in /etc/neutron/neutron.conf:
>         
>         service_plugins = router,vpnaas,lbaas
>         
>         6. Restart Neutron service:
>         
>         service neutron-server restart
>         
>         7. Restart LBaaS agent:
>         
>         service neutron-lbaas-agent restart
>         
>         
>         No returns and no warranty! Give it a shot and let me know.
>         
>         James Denton
>         Network Architect
>         Rackspace Private Cloud
>         james.denton at rackspace.com<mailto:james.denton at rackspace.com>
>         
>         On Oct 7, 2015, at 5:08 AM, Yngvi Páll Þorfinnsson
>         <yngvith at siminn.is<mailto:yngvith at siminn.is>> wrote:
>         
>         OK, thanks a lot Sayaji  ;-)
>         
>         Regards
>         Yngvi
>         
>         From: Sayaji Patil [mailto:sayaji15 at gmail.com]
>         Sent: 6. október 2015 18:21
>         To: Yngvi Páll Þorfinnsson
>         <yngvith at siminn.is>
>         Cc:
>         openstack at lists.openstack.org
>         Subject: Re: [Openstack] LBaaS & VPNaaS
>         
>         I was able to get VPNaas working by following this link
>         
>         https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall
>         
>         Regards,
>         Sayaji
>         
>         On Tue, Oct 6, 2015 at 3:38 AM, Yngvi Páll Þorfinnsson
>         <yngvith at siminn.is> wrote:
>         Dear all
>         
>         Can anyone please advise me on a good install guide for
>         OpenStack Juno for
>         LBaaS and VPNaaS?
>         My openstack servers are all Ubuntu 14.04 LTS.
>         
>         Best regards
>         Yngvi
>         
>         _______________________________________________
>         Mailing list:
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         Post to     :
>         openstack at lists.openstack.org
>         Unsubscribe :
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         
>         -------------- next part --------------
>         An HTML attachment was scrubbed...
>         URL:
>         <http://lists.openstack.org/pipermail/openstack/attachments/20151008/05ed4c82/attachment-0001.html>
>         
>         ------------------------------
>         
>         Message: 13
>         Date: Thu, 8 Oct 2015 11:02:13 +0200
>         From: Thierry Carrez <thierry at openstack.org>
>         To: OpenStack Development Mailing List
>                 <openstack-dev at lists.openstack.org>,
>         openstack at lists.openstack.org
>         Subject: [Openstack] [Horizon] Liberty RC2 available
>         Message-ID: <56163115.7060602 at openstack.org>
>         Content-Type: text/plain; charset=windows-1252
>         
>         Hello everyone,
>         
>         In order to include last-minute translation updates and fix a
>         couple of
>         issues, a new liberty release candidate was created for
>         Horizon. RC2
>         tarballs are available at:
>         
>         https://launchpad.net/horizon/liberty/liberty-rc2
>         
>         Unless new release-critical issues are found that warrant a
>         last-minute
>         release candidate respin, this tarball will be formally
>         released as the
>         final "Liberty" version on October 15. You are therefore
>         strongly
>         encouraged to test and validate this tarball!
>         
>         Alternatively, you can directly test the stable/liberty branch
>         at:
>         http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/liberty
>         
>         If you find an issue that could be considered
>         release-critical, please
>         file it at:
>         
>         https://bugs.launchpad.net/horizon/+filebug
>         
>         and tag it *liberty-rc-potential* to bring it to the release
>         crew's
>         attention.
>         
>         Thanks!
>         
>         --
>         Thierry Carrez (ttx)
>         
>         
>         
>         ------------------------------
>         
>         Message: 14
>         Date: Thu, 8 Oct 2015 13:00:03 +0200
>         From: varun bhatnagar <varun292006 at gmail.com>
>         To: Sergey Reshetnyak <sreshetniak at mirantis.com>
>         Cc: openstack at lists.openstack.org
>         Subject: Re: [Openstack] [OpenStack] Sahara fails to launch
>         hadoop
>                 cluster
>         Message-ID:
>         
>         <CAGxOggEyknQOLUXof=qgv51WSvO9=MOYEmE_ihovGtC=tUyS=A at mail.gmail.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         Hello Sergey,
>         
>         Thanks a lot for the reply.
>         I use neutron networking, so the parameter "use_neutron" is set
>         to true.
>         
>         BR,
>         Varun
>         
>         On Thu, Oct 8, 2015 at 10:42 AM, Sergey Reshetnyak
>         <sreshetniak at mirantis.com
>         > wrote:
>         
>         > Hi,
>         >
>         > Do you use neutron or nova-network for networking? If you
>         use nova-network,
>         > check that the "use_neutron" parameter is set to "false" in
>         sahara.conf.
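[Editor's sketch] Sergey's check can be scripted with nothing but Python's standard configparser. A minimal sketch follows; the embedded sample text and the /etc/sahara/sahara.conf path are illustrative only, on a real node you would read the actual file:

```python
# Sketch: parse a sahara.conf-style [DEFAULT] section and report which
# networking backend it declares. The sample string below is made up;
# on a real node use cfg.read("/etc/sahara/sahara.conf") instead.
import configparser

sample = """\
[DEFAULT]
use_neutron = true
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# getboolean understands true/false, yes/no, on/off, 1/0
uses_neutron = cfg.getboolean("DEFAULT", "use_neutron", fallback=False)
backend = "neutron" if uses_neutron else "nova-network"
print("sahara.conf declares:", backend)
```

If the printed backend does not match the deployment (e.g. neutron declared but nova-network actually in use), that mismatch is the kind of misconfiguration Sergey is pointing at.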
>         >
>         > 2015-10-08 11:12 GMT+03:00 varun bhatnagar
>         <varun292006 at gmail.com>:
>         >
>         >> Hi,
>         >>
>         >> I am using single node OpenStack Kilo setup.
>         >> I am trying to launch a Hadoop cluster and the cluster is
>         failing after
>         >> sometime with an error message:
>         >>
>         >> *NeutronClientException: 404 Not Found*
>         >>
>         >> *The resource could not be found.*
>         >>
>         >>
>         >> Also, when I try to list down the security group using
>         neutron command I
>         >> get the same error message:
>         >> *[root at controller ~(keystone_admin)]# neutron
>         security-group-list*
>         >> *404 Not Found*
>         >>
>         >> *The resource could not be found.*
>         >>
>         >> I am using nova security group:
>         >> *[root at controller ~(keystone_admin)]# nova secgroup-list*
>         >> *+----+---------+-------------+*
>         >> *| Id | Name    | Description |*
>         >> *+----+---------+-------------+*
>         >> *| 1  | default | default     |*
>         >> *+----+---------+-------------+*
>         >>
>         >> Can anyone please help and give any suggestions so that I
>         can move forward
>         >> and launch my cluster?
>         >>
>         >>
>         >> I am pasting the stack below:
>         >>
>         >> *2015-10-07 15:55:27.198 1566 DEBUG
>         neutronclient.v2_0.client [-] Error
>         >> message: 404 Not Found*
>         >>
>         >> *The resource could not be found.*
>         >>
>         >> *    _handle_fault_response
>         >> /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:176*
>         >> *2015-10-07 15:55:28.469 1566 INFO sahara.utils.general [-]
>         Cluster
>         >> status has been changed:
>         id=1adbb991-d903-4af8-9fd2-5ed392a253e3, New
>         >> status=Error*
>         >> *2015-10-07 15:55:28.470 1566 ERROR sahara.utils.api [-]
>         Request aborted
>         >> with status code 500 and message 'Internal Server Error'*
>         >> *2015-10-07 15:55:28.645 1566 ERROR sahara.utils.api [-]
>         Traceback (most
>         >> recent call last):*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/sahara/utils/api.py", line
>         90,
>         >> in handler*
>         >> *    return func(**kwargs)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/sahara/api/acl.py", line 44,
>         in
>         >> handler*
>         >> *    return func(*args, **kwargs)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/sahara/service/validation.py",
>         >> line 47, in handler*
>         >> *    return func(*args, **kwargs)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/sahara/api/v10.py", line 50,
>         in
>         >> clusters_create*
>         >> *    return
>         u.render(api.create_cluster(data).to_wrapped_dict())*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/sahara/service/api.py", line
>         >> 112, in create_cluster*
>         >> *    six.text_type(e))*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py",
>         line
>         >> 85, in __exit__*
>         >> *    six.reraise(self.type_, self.value, self.tb)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/sahara/service/api.py", line
>         >> 107, in create_cluster*
>         >> *    quotas.check_cluster(cluster)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/sahara/service/quotas.py",
>         line
>         >> 52, in check_cluster*
>         >> *    _check_limits(req_limits)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/sahara/service/quotas.py",
>         line
>         >> 73, in _check_limits*
>         >> *    avail_limits = _get_avail_limits()*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/sahara/service/quotas.py",
>         line
>         >> 126, in _get_avail_limits*
>         >> *    limits.update(_get_neutron_limits())*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/sahara/service/quotas.py",
>         line
>         >> 173, in _get_neutron_limits*
>         >> *    tenant_id=tenant_id).get('security_groups', [])*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
>         >> line 102, in with_params*
>         >> *    ret = self.function(instance, *args, **kwargs)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
>         >> line 728, in list_security_groups*
>         >> *    retrieve_all, **_params)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
>         >> line 307, in list*
>         >> *    for r in self._pagination(collection, path,
>         **params):*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
>         >> line 320, in _pagination*
>         >> *    res = self.get(path, params=params)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
>         >> line 293, in get*
>         >> *    headers=headers, params=params)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
>         >> line 270, in retry_request*
>         >> *    headers=headers, params=params)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
>         >> line 211, in do_request*
>         >> *    self._handle_fault_response(status_code, replybody)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
>         >> line 185, in _handle_fault_response*
>         >> *    exception_handler_v20(status_code, des_error_body)*
>         >> *  File
>         "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
>         >> line 83, in exception_handler_v20*
>         >> *    message=message)*
>         >>
>         >> *NeutronClientException: 404 Not Found*
>         >>
>         >> *The resource could not be found.*
>         >>
>         >>
>         >>
>         >>
>         >>
>         >
>         
>         ------------------------------
>         
>         Message: 15
>         Date: Thu, 8 Oct 2015 17:00:13 +0530
>         From: saurabh suman <90.suman at gmail.com>
>         To: Adam Lawson <alawson at aqorn.com>
>         Cc: openstack <openstack at lists.openstack.org>
>         Subject: Re: [Openstack] Cinder vs Swift architecture
>         Message-ID:
>                 <CAHoSm6Jb
>         +DVFot_KABwWfR7K_8_tPWzQaXmghuBbCkCHE=Zw3w at mail.gmail.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         That was certainly some quick and dirty explanation, Adam. Thanks :).
>         
>         What is still unclear to me is the architecture.
>         In all of the above storage technologies (file, block, and
>         object) we have a
>         physical hard drive. What are the layers above it? We must
>         have a driver, and then we must format the drive with some
>         file system (FAT32, NTFS, xfs, etc.). What are the layers
>         above those that make the drive act as block
>         storage, object storage, or file storage?
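[Editor's sketch] The layering question above can be illustrated with a toy model. This is purely illustrative Python, not OpenStack code: it only shows the access-model difference the thread discusses, i.e. a block volume is addressed by byte offset and formatted by its consumer, while an object store is addressed by object name through an API.

```python
# Toy model (illustrative only) of block vs. object access semantics.

class BlockVolume:
    """A raw byte range, like a Cinder/iSCSI volume: the consumer
    addresses it by offset and is responsible for formatting it."""
    def __init__(self, size):
        self.data = bytearray(size)          # unformatted raw bytes

    def write(self, offset, payload):
        self.data[offset:offset + len(payload)] = payload

    def read(self, offset, length):
        return bytes(self.data[offset:offset + length])


class ObjectStore:
    """Whole objects keyed by name, like the Swift API: no offsets,
    no formatting, just put/get of complete objects."""
    def __init__(self):
        self.objects = {}

    def put(self, name, payload):
        self.objects[name] = bytes(payload)

    def get(self, name):
        return self.objects[name]


vol = BlockVolume(1024)
vol.write(512, b"hello")                     # byte-level access
store = ObjectStore()
store.put("reports/q3.txt", b"hello")        # whole-object access
```

The file systems mentioned in the question (FAT32, NTFS, xfs) live *inside* the BlockVolume side of this model: the guest formats the raw bytes. On the object side there is nothing to format; the store's API is the only interface.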
>         
>         Regards,
>         Saurav
>         
>         
>         On Tue, Oct 6, 2015 at 12:19 AM, Adam Lawson
>         <alawson at aqorn.com> wrote:
>         
>         > Hi Ayushi,
>         >
>         > The quick and dirty explanation:
>         >
>         > Cinder can have different back-ends. With a Linux approach,
>         Cinder uses
>         > LVM2 to carve up local disks so they can be consumed by
>         virtual machines
>         > that need to attach to a volume (for data or to boot). A
>         common way for
>         > VMs to consume these volumes is via iSCSI (see iSCSI
>         targets and
>         > initiators for more details). The end result is that the
>         VMs talk to the
>         > volumes rather than talking to Cinder to get to the data.
>         >
>         > Swift also carves up local disks and stores data on them but
>         the data
>         > itself is accessed using an API (similar to dropbox). No
>         exceptions. VMs
>         don't mount the data on the back-end the way they do with
>         Cinder-managed
>         > volumes, but object storage is instead leveraged for things
>         like user
>         > storage, enterprise department data storage (similar to
>         > Accounting/Marketing/IT share drive) or as a back-end for
>         services that
>         > need a safe place to store files for future use like Glance,
>         Cinder
>         > backups, Barbican, etc.
>         >
>         > In terms of how the disks are consumed on a local machine,
>         Cinder allows
>         > the VM to mount and format the drive/volume then use however
>         it wants to.
>         > Swift formats it and presents a pre-formatted volume so you
>         can just drop a
>         > file and get it later.
>         >
>         > Does that help at all?
>         >
>         > //adam
>         >
>         >
>         >
>         >
>         > *Adam Lawson*
>         >
>         > AQORN, Inc.
>         > 427 North Tatnall Street
>         > Ste. 58461
>         > Wilmington, Delaware 19801-2230
>         > Toll-free: (844) 4-AQORN-NOW ext. 101
>         > International: +1 302-387-4660
>         > Direct: +1 916-246-2072
>         >
>         >
>         > On Mon, Oct 5, 2015 at 12:26 AM, Abhishek Shrivastava <
>         > abhishek at cloudbyte.com> wrote:
>         >
>         >> Hi Ayushi,
>         >>
>         >> If you had not gone through this link, then please have a
>         look and let us
>         >> know if it is useful:
>         >>
>         >>    -
>         >>
>         http://www.computerweekly.com/feature/OpenStack-storage-Cinder-and-Swift-explained
>         >>
>         >>
>         >> On Mon, Oct 5, 2015 at 12:27 PM, Ayushi Kumar
>         <ayushi.03march at gmail.com>
>         >> wrote:
>         >>
>         >>> Hi,
>         >>>
>         >>> Even after going through a number of links and documents
>         online, I am
>         >>> unable to understand exactly
>         >>>
>         >>>    - *how file storage, Cinder, and Swift work if I talk
>         in terms of a
>         >>>    personal computer which has OpenStack installed on it.*
>         >>>
>         >>> Can anyone please explain, if I have a hard drive,
>         >>>
>         >>>    - *how Cinder uses it for block storage and how
>         Swift uses it
>         >>>    for object storage.*
>         >>>
>         >>> Though the difference between the two storage types is
>         clear, I am unable to
>         >>> understand how the same hard drive is used for both object
>         storage and
>         >>> block storage.
>         >>>
>         >>> Regards
>         >>> Ayushi
>         >>>
>         >>>
>         >>>
>         >>
>         >>
>         >> --
>         >>
>         >>
>         >> *Thanks & Regards,*
>         >> *Abhishek*
>         >> *Cloudbyte Inc. <http://www.cloudbyte.com>*
>         >>
>         >>
>         >>
>         >
>         >
>         >
>         
>         ------------------------------
>         
>         Message: 16
>         Date: Thu, 08 Oct 2015 11:32:58 +0000
>         From: Paul Michali <pc at michali.net>
>         To: James Denton <james.denton at rackspace.com>,  Yngvi
>         Páll Þorfinnsson
>                 <yngvith at siminn.is>
>         Cc: "openstack at lists.openstack.org"
>         <openstack at lists.openstack.org>
>         Subject: Re: [Openstack] LBaaS & VPNaaS
>         Message-ID:
>                 <CA+ikoRP
>         +X-+ndTbkHs7wv2OzCabWkLZwZAiSE0OcN6_t6UkzRg at mail.gmail.com>
>         Content-Type: text/plain; charset="utf-8"
>         
>         I've mostly run VPNaaS via devstack, and haven't worked with
>         Juno in a long
>         time...
>         
>         See @PCM in-line...
>         
>         
>         
>         On Wed, Oct 7, 2015 at 1:37 PM James Denton
>         <james.denton at rackspace.com>
>         wrote:
>         
>         > Hi Yngvi,
>         >
>         > In my most recent experience with VPNaaS on Kilo, I did the
>         following (all
>         > on the controller node):
>         >
>         > 1. Install VPN agent
>         >
>         > apt-get install neutron-vpnaas-agent
>         >
>         > 2. Edit /etc/neutron/vpn_agent.ini and add the following to
>         configure the
>         > device driver:
>         >
>         > [vpnagent]
>         > vpn_device_driver
>         > =
>         neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver
>         >
>         
>         @PCM Two points to consider here, Yngvi. First, do you want
>         to run an OpenSwan- or StrongSwan-based implementation?
>         Second, I'm not sure how solid
>         StrongSwan was in Juno (it came out in Juno, and afterwards
>         there were some
>         fixes, like IPv6 support, etc.).
>         
>         
>         
>         >
>         > 3. Edit /etc/neutron/neutron.conf and add vpnaas to the list
>         of service
>         > plugins:
>         >
>         > service_plugins = router,vpnaas
>         >
>         
>         > 4. Edit /etc/neutron/neutron_vpnaas.conf and configure the
>         service
>         > provider:
>         >
>         > [service_providers]
>         > service_provider =
>         >
>         VPN:vpnaas:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
>         >
>         
>         @PCM James, back in Juno there was no neutron_vpnaas.conf
>         file, IIRC. This
>         would go in neutron.conf instead.
>         
>         
>         
>         > 5. Restart Neutron service:
>         >
>         > service neutron-server restart
>         >
>         > 6. Update AppArmor profile:
>         >
>         > sudo ln
>         -sf /etc/apparmor.d/usr.lib.ipsec.charon /etc/apparmor.d/disable/
>         > sudo ln
>         -sf /etc/apparmor.d/usr.lib.ipsec.stroke /etc/apparmor.d/disable/
>         > service apparmor restart
>         >
>         
>         @PCM Above is only needed for StrongSwan, and not OpenSwan
>         
>         
>         
>         >
>         > 7. Work around
>         https://bugs.launchpad.net/neutron/+bug/1456335
>         >
>         > cat >> /usr/bin/neutron-vpn-netns-wrapper << EOF
>         > #!/usr/bin/python2
>         >
>         > import sys
>         >
>         > from neutron_vpnaas.services.vpn.common.netns_wrapper import
>         main
>         >
>         > if __name__ == "__main__":
>         >     sys.exit(main())
>         > EOF
>         >
>         >
>         8. Set permissions:
>         >
>         > chmod 755 /usr/bin/neutron-vpn-netns-wrapper
>         >
>         
>         @PCM Steps 7 & 8 only for Strongswan, right?
>         
>         
>         
>         >
>         > 9. Restart VPN agent
>         >
>         >
>         
>         ------------------------------
>         
>         Message: 17
>         Date: Thu, 08 Oct 2015 14:43:43 +0300
>         From: Georgios Dimitrakakis <giorgis at acmac.uoc.gr>
>         To: <openstack at lists.openstack.org>
>         Subject: [Openstack] Mac Address Question
>         Message-ID: <ec0dd085a2aa0304f7a014610a4d68d0 at acmac.uoc.gr>
>         Content-Type: text/plain; charset=UTF-8; format=flowed
>         
>          Dear all,
>         
>          I am wondering if it's possible to start a VM with a
>          predefined MAC
>          address (or a set of VMs from a pool of MAC addresses). The
>          reason I
>          want to do this is that I have a license server that permits
>          software
>          to run only if
>          the MAC address is on the acceptance list.
>         
>          If you have any recommendations I am all ears.
>         
>         
>          Best regards,
>         
>         
>          G.
>         
>         
>         
>         ------------------------------
>         
>         _______________________________________________
>         Openstack mailing list
>         openstack at lists.openstack.org
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>         
>         
>         End of Openstack Digest, Vol 28, Issue 8
>         ****************************************
> 
> 
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





More information about the Openstack mailing list