[openstack-dev] [Quantum] Summit Sessions

Ronak Shah ronak at nuagenetworks.net
Fri Mar 29 17:07:23 UTC 2013


Hi,
I have added a blueprint and summit session for ACL.
I am sure many of the plugin builders are interested in this feature.
I would like to discuss this during the summit and make it happen in H.
Please review.

Ronak


On Fri, Mar 29, 2013 at 5:00 AM,
<openstack-dev-request at lists.openstack.org> wrote:

> Send OpenStack-dev mailing list submissions to
>         openstack-dev at lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> or, via email, send a message with subject or body 'help' to
>         openstack-dev-request at lists.openstack.org
>
> You can reach the person managing the list at
>         openstack-dev-owner at lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of OpenStack-dev digest..."
>
>
> Today's Topics:
>
>    1. Re: [horizon] [*client] WAS [Openstack-stable-maint] Build
>       failed in Jenkins: periodic-horizon-python27-stable-folsom #178
>       (Mark McLoughlin)
>    2. [Doc][LBaaS] API doc for LBaaS extension is ready for review
>       (Ilya Shakhat)
>    3. Re: Applying oslo.wsgi back to each project (Russell Bryant)
>    4. Re: [Quantum] Summit Sessions (Henry Gessau)
>    5. Re: [keystone] naming case sensitive or not? (Dolph Mathews)
>    6.  Volume encryption (Paul Sarin-Pollet)
>    7. Re: PCI-passthrough dev ... (Ian Wells)
>    8. [Savanna] Weekly meeting (#openstack-meeting-alt)
>       (Sergey Lukjanov)
>    9. Announcing Heat grizzly-rc2 (Steven Dake)
>   10. Re: Volume encryption (Bhandaru, Malini K)
>   11. Re: Volume encryption (Caitlin Bestler)
>   12. Re: PCI-passthrough dev ... (Irena Berezovsky)
>   13. Re: PCI-passthrough dev ... (Jiang, Yunhong)
>   14. Re: [keystone] Keystone handling http requests    synchronously
>       (Adam Young)
>   15. Re: [keystone] Keystone handling http requests    synchronously
>       (Mike Wilson)
>   16. Re: PCI-passthrough dev ... (Itzik Brown)
>   17. [Quantum][LBaaS]- - LBaaS Extension in Quantum    Plugin
>       (Pattabi Ayyasami)
>   18. Re: Future of Launchpad Answers [Community] (Stefano Maffulli)
>   19. Re: Future of Launchpad Answers [Community] (Anne Gentle)
>   20. Re: [Quantum] Summit Sessions (Nachi Ueno)
>   21. Re: [Quantum][LBaaS]- - LBaaS Extension in Quantum Plugin
>       (Monty Taylor)
>   22. Re: [EHO] Project name change [Savanna] (Monty Taylor)
>   23. Re: [EHO] Project name change [Savanna] (Monty Taylor)
>   24. Re: [keystone] naming case sensitive or not? (Samuel Merritt)
>   25. Re: [Quantum][LBaaS]- - LBaaS Extension in Quantum        Plugin
>       (Eugene Nikanorov)
>   26. Re: [Doc][LBaaS] API doc for LBaaS extension is ready for
>       review (balaji patnala)
>   27. Re: [Doc][LBaaS] API doc for LBaaS extension is ready for
>       review (Ilya Shakhat)
>   28. Fwd: [keystone] Keystone handling http requests   synchronously
>       (Chmouel Boudjnah)
>   29.  Supporting KMIP in Key Manager (Paul Sarin-Pollet)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 28 Mar 2013 12:13:12 +0000
> From: Mark McLoughlin <markmc at redhat.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Cc: openstack-stable-maint
>         <openstack-stable-maint at lists.openstack.org>
> Subject: Re: [openstack-dev] [horizon] [*client] WAS
>         [Openstack-stable-maint] Build failed in Jenkins:
>         periodic-horizon-python27-stable-folsom #178
> Message-ID: <1364472792.9329.102.camel at sorcha>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi Alan,
>
> On Thu, 2013-03-28 at 12:56 +0100, Alan Pevec wrote:
> > Hi all,
> >
> > horizon stable/folsom started failing after keystoneclient 0.2.3; the
> > last good run was
> > https://jenkins.openstack.org/job/periodic-horizon-python27-stable-folsom/168/
> > with keystoneclient 0.2.2.
> >
> > All OpenStack clients are supposed to be backward compatible, so I'm
> > not sure if the solution here is to lock all client version on
> > stable/* or is Horizon using keystoneclient internals which aren't
> > part of stable API?
>
> Thanks for finding this. The client libraries definitely aren't supposed
> to be breaking their APIs.
>
> Maybe we can get the backwards incompatible API change reverted and a
> new release made?
>
> Thanks,
> Mark.
>
> > 2013/3/28 OpenStack Jenkins <jenkins at openstack.org>:
> > > See
> https://jenkins.openstack.org/job/periodic-horizon-python27-stable-folsom/178/
> >
> > ...snip...
> >
> > 2013-03-28 06:04:01.567 | FAIL: test_get_default_role
> > (horizon.tests.api_tests.keystone_tests.RoleAPITests)
> > 2013-03-28 06:04:01.567 |
> > ----------------------------------------------------------------------
> > 2013-03-28 06:04:01.567 | Traceback (most recent call last):
> > 2013-03-28 06:04:01.568 |   File
> >
> "/home/jenkins/workspace/periodic-horizon-python27-stable-folsom/horizon/tests/api_tests/keystone_tests.py",
> > line 76, in test_get_default_role
> > 2013-03-28 06:04:01.568 |     keystoneclient = self.stub_keystoneclient()
> > 2013-03-28 06:04:01.568 |   File
> >
> "/home/jenkins/workspace/periodic-horizon-python27-stable-folsom/horizon/test.py",
> > line 329, in stub_keystoneclient
> > 2013-03-28 06:04:01.568 |     self.keystoneclient =
> > self.mox.CreateMock(keystone_client.Client)
> > 2013-03-28 06:04:01.568 |   File
> >
> "/home/jenkins/workspace/periodic-horizon-python27-stable-folsom/.tox/py27/local/lib/python2.7/site-packages/mox.py",
> > line 258, in CreateMock
> > 2013-03-28 06:04:01.568 |     new_mock = MockObject(class_to_mock,
> attrs=attrs)
> > 2013-03-28 06:04:01.568 |   File
> >
> "/home/jenkins/workspace/periodic-horizon-python27-stable-folsom/.tox/py27/local/lib/python2.7/site-packages/mox.py",
> > line 556, in __init__
> > 2013-03-28 06:04:01.568 |     attr = getattr(class_to_mock, method)
> > 2013-03-28 06:04:01.568 |   File
> >
> "/home/jenkins/workspace/periodic-horizon-python27-stable-folsom/.tox/py27/local/lib/python2.7/site-packages/mox.py",
> > line 608, in __getattr__
> > 2013-03-28 06:04:01.568 |     raise UnknownMethodCallError(name)
> > 2013-03-28 06:04:01.568 | UnknownMethodCallError: Method called is not
> > a member of the object: Method called is not a member of the object:
> > auth_token
> > 2013-03-28 06:04:01.568 | >>  raise UnknownMethodCallError('auth_token')
> >
> > ...snip...
> >
> > 2013-03-28 06:04:03.188 | python-keystoneclient==0.2.3
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
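The traceback above boils down to how mox builds a mock: it walks the mocked class's attributes, and Horizon's stub then records a call on `auth_token`. If a keystoneclient release drops that attribute from `Client`, the recorded call no longer matches. A toy illustration of the mechanism (not mox's actual implementation; the class names are hypothetical stand-ins for the two client versions):

```python
class ToyMock:
    """Minimal stand-in for mox's CreateMock: only attributes present on
    the mocked class may be called on the mock."""

    def __init__(self, class_to_mock):
        self._allowed = {name for name in dir(class_to_mock)
                         if not name.startswith("_")}

    def call(self, name):
        if name not in self._allowed:
            raise AttributeError(
                "Method called is not a member of the object: " + name)
        return "stubbed " + name


class ClientV022:
    """Stands in for keystoneclient 0.2.2's Client (has auth_token)."""
    auth_token = None


class ClientV023:
    """Stands in for 0.2.3 with the attribute removed (hypothetical)."""


result = ToyMock(ClientV022).call("auth_token")  # matches, test passes
try:
    ToyMock(ClientV023).call("auth_token")
    broke = False
except AttributeError:
    broke = True  # the stable-branch test breaks, as in the Jenkins log
```

This is why pinning client versions on stable branches (or reverting the incompatible change, as Mark suggests) fixes the periodic job.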
>
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 28 Mar 2013 16:13:35 +0400
> From: Ilya Shakhat <ishakhat at mirantis.com>
> To: "OpenStack Development Mailing List
>         (openstack-dev at lists.openstack.org)"
>         <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [Doc][LBaaS] API doc for LBaaS extension is
>         ready   for review
> Message-ID:
>         <
> CAMzOD1+rhmYSm9FEQ5HBdDkfbGPGaSUWbuerbkC_C-fiTmAtYA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi,
>
> Please review a new section in API docs describing LBaaS extension. Review
> is https://review.openstack.org/#/c/25409/
> The text is partially based on
> https://wiki.openstack.org/wiki/Quantum/LBaaS/API_1.0 . Requests and
> responses were captured from traffic between the python client and quantum,
> so they may differ slightly from what is documented on the wiki.
>
> Thanks,
> Ilya
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20130328/59c524e5/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 3
> Date: Thu, 28 Mar 2013 09:07:31 -0400
> From: Russell Bryant <rbryant at redhat.com>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] Applying oslo.wsgi back to each project
> Message-ID: <51544093.1050603 at redhat.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On 03/28/2013 04:15 AM, Zhongyue Luo wrote:
> > Hi all,
> >
> > I noticed that one of the modules being totally ignored in oslo is
> > wsgi.py which should actually be used in most of the OpenStack projects.
> > (Based on zero results from a "find . -name "openstack-common.conf"
> > -exec grep -l wsgi {} \;")
> >
> > Before it's too late, we should apply what is currently available and
> > avoid further divergence of code.
>
> On a related note: http://summit.openstack.org/cfp/details/12
>
> --
> Russell Bryant
>
>
>
> ------------------------------
>
> Message: 4
> Date: Thu, 28 Mar 2013 10:00:58 -0400
> From: Henry Gessau <gessau at cisco.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Quantum] Summit Sessions
> Message-ID: <51544D1A.7040202 at cisco.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi Nachi,
>
> Thanks for bringing this to my attention. My initial reaction is that, yes,
> it should be covered by QoS. I will refer to it in my write-up for the QoS
> proposal, and keep in touch with you for a potential merge.
>
> -- Henry
>
> On Wed, Mar 27, at 7:56 pm, Nachi Ueno <nachi at nttmcl.com> wrote:
>
> > Hi
> >
> > I'm also planning to implement related feature in H.
> > BP
> https://blueprints.launchpad.net/quantum/+spec/quantum-basic-traffic-control-on-external-gateway
> >
> > Basically, I want to stop one tenant from exhausting the external
> > network connection.
> >
> > Maybe we can merge our proposals.
> > Is your QoS API a per-port based one?
> >
> > Regards
> > Nachi
> >
> > 2013/3/27 Henry Gessau <gessau at cisco.com>:
> >> I will be adding some more details to the proposal soon.
> >>
> >> -- Henry
> >>
> >> On Wed, Mar 27, at 10:50 am, gong yong sheng <
> gongysh at linux.vnet.ibm.com> wrote:
> >>
> >>> It will help if you can have some design ready before the summit discussion.
> >>> On 03/27/2013 10:33 PM, Sean M. Collins wrote:
> >>>> I'd like to get the QoS API proposal in as well.
> >>>>
> >>>> http://summit.openstack.org/cfp/details/160
> >>>>
> >>>> I am currently working with Comcast, and this is a must-have feature
> in
> >>>> Quantum.
> >>>>
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> OpenStack-dev mailing list
> >>>> OpenStack-dev at lists.openstack.org
> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>>
> >>>
> >>> _______________________________________________
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev at lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> ------------------------------
>
> Message: 5
> Date: Thu, 28 Mar 2013 10:06:14 -0500
> From: Dolph Mathews <dolph.mathews at gmail.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [keystone] naming case sensitive or not?
> Message-ID:
>         <CAC=
> h7gXJhb7szVksa71UVmS0_AGjF2hZAY9agc7T23DkbBZ3Fw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> That's basically up to the identity driver in use -- for example, with the
> SQL driver, if your database is case sensitive, then keystone will be as
> well.
>
> If the driver is case sensitive, you should have gotten a 409 Conflict back
> on your second example command.
>
>
> -Dolph
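Dolph's point can be demonstrated with SQLite collations as an illustrative stand-in (Keystone's SQL driver would typically sit on MySQL or PostgreSQL, where column collation plays the same role): with a case-sensitive collation the second `user-create` succeeds, with a case-insensitive one it hits the unique constraint, i.e. the 409 Conflict case.

```python
import sqlite3

def second_insert(collation):
    """Create a unique name column with the given collation, insert
    'Usera' then 'UserA', and report what happened to the second row."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT COLLATE %s UNIQUE)" % collation)
    conn.execute("INSERT INTO users (name) VALUES ('Usera')")
    try:
        conn.execute("INSERT INTO users (name) VALUES ('UserA')")
        return "created"      # case-sensitive: names are distinct
    except sqlite3.IntegrityError:
        return "conflict"     # case-insensitive: 409 Conflict behavior

case_sensitive = second_insert("BINARY")    # "created"
case_insensitive = second_insert("NOCASE")  # "conflict"
```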
>
>
> On Thu, Mar 28, 2013 at 5:57 AM, Hua ZZ Zhang <zhuadl at cn.ibm.com> wrote:
>
> > Dears,
> >
> > I have a question about keystone case sensitive of naming, such as user
> > name, tenant name, role name.
> > Are they case sensitive or not?
> >
> > I tested the commands below and the second one failed, so my conclusion
> > is that naming is case insensitive.
> > keystone user-create --name Usera --pass xyz
> > keystone user-create --name UserA --pass xyz
> >
> > *Best Regards, *
> >
> > ------------------------------
> >
> >    *Edward Zhang*
> >    Advisory Software Engineer
> >    Software Standards & Open Source Software
> >    Emerging Technology Institute(ETI)
> >    IBM China Software Development Lab
> >    e-mail: zhuadl at cn.ibm.com
> >    Notes ID: Hua ZZ Zhang/China/IBM
> >    Tel: 86-10-82450483
> >
> >
> >
> >
> >
> >
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
>
> ------------------------------
>
> Message: 6
> Date: Thu, 28 Mar 2013 17:35:33 +0100
> From: "Paul Sarin-Pollet" <psarpol at gmx.com>
> To: "OpenStack Development Mailing List"
>         <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev]  Volume encryption
> Message-ID: <20130328163533.67820 at gmx.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi all,
>
> Do you think it would be possible to add an option to let the user enter
> his own key?
> The key would not be stored by the CSP and would be under the user's
> responsibility.
>
> Thanks
>
> Paul
>
> ------------------------------
>
> Message: 7
> Date: Thu, 28 Mar 2013 18:29:46 +0100
> From: Ian Wells <ijw.ubuntu at cack.org.uk>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] PCI-passthrough dev ...
> Message-ID:
>         <CAPoubz4UUL=
> WKiDazpukCsU-9tR2B8N2wOCQHgaOV9q+zm6+UQ at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Chuck's is not the only summit session:
>
> http://summit.openstack.org/cfp/details/81 (the Quantum side of things if
> you're mapping network devices)
>
> ... and I know Mellanox have also been working on that particular side of
> things too.  Perhaps we could all get together at the summit before the
> sessions for a show and tell?  We could equally work it all out in the
> sessions, but it seems it would be hard to lead a session like that without
> having some knowledge up front of what other people have done.
>
> To be fair, our code is basically Vladimir Popovski's (Zardara Storage's)
> work, tidied up and with a scheduler check to make sure there are actually
> available SRIOV functions on the node before scheduling.  It's there, it's
> actually a patch against Folsom at the moment, and it works, but I wouldn't
> lay claim to it necessarily being the One True Way to implement this (and I
> don't think it has tests enough to pass a review).
> --
> Ian.
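The scheduler check Ian describes (skip hosts with no free SR-IOV virtual functions before scheduling) might look roughly like this; all field names here are illustrative stand-ins, not Nova's actual host-state API:

```python
def host_has_free_sriov_vfs(host_state, requested_vfs=1):
    """Return True if the host reports enough unclaimed SR-IOV VFs."""
    free = (host_state.get("sriov_vfs_total", 0)
            - host_state.get("sriov_vfs_used", 0))
    return free >= requested_vfs

candidates = [
    {"name": "node1", "sriov_vfs_total": 8, "sriov_vfs_used": 8},  # full
    {"name": "node2", "sriov_vfs_total": 8, "sriov_vfs_used": 3},  # free VFs
    {"name": "node3"},  # no SR-IOV hardware reported at all
]
eligible = [h["name"] for h in candidates if host_has_free_sriov_vfs(h)]
# only node2 survives the filter
```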
>
>
>
> On 25 March 2013 16:23, Russell Bryant <rbryant at redhat.com> wrote:
>
> > On 03/24/2013 01:44 PM, Ian Wells wrote:
> > > Yep, we've got code in test at the moment.
> >
> > This is (at least) the second instance that I've heard of where PCI
> > passthrough has been implemented, but code hasn't surfaced yet.  It's
> > really unfortunate to see the duplication of effort happening.
> >
> > Chuck Short also proposed a design summit session on it, presumably to
> > discuss implementing it yet another time:
> >
> >     http://summit.openstack.org/cfp/details/29
> >
> > It would be really nice to get some code out in the open for this.  :-)
> >
> > --
> > Russell Bryant
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ------------------------------
>
> Message: 8
> Date: Thu, 28 Mar 2013 21:46:33 +0400
> From: Sergey Lukjanov <slukjanov at mirantis.com>
> To: "openstack at lists.launchpad.net" <openstack at lists.launchpad.net>,
>         "openstack-dev at lists.openstack.org"
>         <openstack-dev at lists.openstack.org>
> Cc: "savanna-all at lists.launchpad.net"
>         <savanna-all at lists.launchpad.net>, "eho at lists.launchpad.net"
>         <eho at lists.launchpad.net>
> Subject: [openstack-dev] [Savanna] Weekly meeting
>         (#openstack-meeting-alt)
> Message-ID: <07AC732E-CF27-4B8A-B124-C29831BF5E4F at mirantis.com>
> Content-Type: text/plain; charset=us-ascii
>
> Hi,
>
> Today there will be our third weekly community meeting about Savanna at
> 18:00 UTC on irc channel #openstack-meeting-alt at freenode.
>
> Come along.
>
> Sergey Lukjanov
>
>
> ------------------------------
>
> Message: 9
> Date: Thu, 28 Mar 2013 10:49:25 -0700
> From: Steven Dake <sdake at redhat.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] Announcing Heat grizzly-rc2
> Message-ID: <515482A5.9080705 at redhat.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi folks!
>
> Grizzly rc2 is available for testing.  A big thanks to Steve Baker and
> Steve Hardy for their patches for this release.
>
> Heat rc2 can be downloaded from:
>
>
> https://launchpad.net/heat/grizzly/grizzly-rc2/+download/heat-2013.1.rc2.tar.gz
>
> Regards
> -steve
>
>
>
> ------------------------------
>
> Message: 10
> Date: Thu, 28 Mar 2013 17:58:35 +0000
> From: "Bhandaru, Malini K" <malini.k.bhandaru at intel.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Volume encryption
> Message-ID:
>         <
> EE6FFF4F6C34C84C8C98DD2414EEA47E520A122D at FMSMSX105.amr.corp.intel.com>
>
> Content-Type: text/plain; charset="utf-8"
>
> Paul,
>
> I am guessing you are referring to volume encryption, because for plain
> object encryption OpenStack can be oblivious to any encryption:
> put/get is adequate, with the user taking care of
> encryption/decryption.
>
> The volume APIs could definitely take an argument with the key-string,
> which would be transmitted over whatever protocol is in effect
> (SSL/TLS, IPsec, or in the clear).
> Where we save <key-id> in the metadata for the volume, we could instead
> save a marker saying 'EXTERNAL_KEY' or 'USER_KEY' or something to that
> effect. It indicates the volume is encrypted, as opposed to plain text.
>
> Regards
> Malini
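Malini's marker idea could be sketched as follows; the field names are illustrative, not Cinder's actual volume-metadata schema:

```python
def encryption_metadata(key_id=None, user_key=False):
    """Build a volume's encryption marker: a stored key reference, an
    externally-held-key marker, or nothing for a plain-text volume."""
    if user_key:
        # Key supplied and held by the user; the CSP never stores it.
        return {"encryption": "USER_KEY"}
    if key_id is not None:
        # Conventional case: the key manager holds the key under this id.
        return {"encryption": "KEY_ID", "key_id": key_id}
    return {"encryption": None}  # plain-text volume

m_user = encryption_metadata(user_key=True)       # externally held key
m_csp = encryption_metadata(key_id="1234-abcd")   # stored key reference
```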
> From: Paul Sarin-Pollet [mailto:psarpol at gmx.com]
> Sent: Thursday, March 28, 2013 9:36 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] Volume encryption
>
> Hi all,
>
> Do you think it would be possible to add an option to let the user enter
> his own key?
> The key would not be stored by the CSP and would be under the user's
> responsibility.
>
> Thanks
>
> Paul
>
> ------------------------------
>
> Message: 11
> Date: Thu, 28 Mar 2013 19:57:06 +0000
> From: Caitlin Bestler <Caitlin.Bestler at nexenta.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Volume encryption
> Message-ID: <719CD19D2B2BFA4CB1B3F00D2A8CDCD0C50C147D at AUSP01DAG0106>
> Content-Type: text/plain; charset="utf-8"
>
>
>
> Paul Sarin-Pollet wrote:
>
> > Do you think it would be possible to add an option to let the user
> > enter his own key?
> > The key would not be stored by the CSP and would be under the user's
> > responsibility.
>
>
> If the user holds and is responsible for the key, why would the user
> want to send the key over the network just to concentrate the
> encrypt/decrypt heavy lifting on the centralized storage server, rather
> than doing the encrypting/decrypting locally?
>
> When the users do not maintain the key is when it makes sense to do the
> encryption/decryption
> on the storage server.
>
>
> ------------------------------
>
> Message: 12
> Date: Thu, 28 Mar 2013 19:58:29 +0000
> From: Irena Berezovsky <irenab at mellanox.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] PCI-passthrough dev ...
> Message-ID:
>         <9D25E123B44F4A4291F4B5C13DA94E7773520335 at MTLDAG02.mtl.com>
> Content-Type: text/plain; charset="us-ascii"
>
> Ian,
> I think it is a good idea to get together before the design summit
> sessions to share experience and discuss the work done in the SR-IOV area.
> Any idea how to arrange it?
>
> Regards,
> Irena
>
> From: Ian Wells [mailto:ijw.ubuntu at cack.org.uk]
> Sent: Thursday, March 28, 2013 7:39 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] PCI-passthrough dev ...
>
> Chuck's is not the only summit session:
>
> http://summit.openstack.org/cfp/details/81 (the Quantum side of things if
> you're mapping network devices)
>
> ... and I know Mellanox have also been working on that particular side of
> things too.  Perhaps we could all get together at the summit before the
> sessions for a show and tell?  We could equally work it all out in the
> sessions, but it seems it would be hard to lead a session like that without
> having some knowledge up front of what other people have done.
>
> To be fair, our code is basically Vladimir Popovski's (Zardara Storage's)
> work, tidied up and with a scheduler check to make sure there are actually
> available SRIOV functions on the node before scheduling.  It's there, it's
> actually a patch against Folsom at the moment, and it works, but I wouldn't
> lay claim to it necessarily being the One True Way to implement this (and I
> don't think it has tests enough to pass a review).
> --
> Ian.
>
>
> On 25 March 2013 16:23, Russell Bryant <rbryant at redhat.com<mailto:
> rbryant at redhat.com>> wrote:
> On 03/24/2013 01:44 PM, Ian Wells wrote:
> > Yep, we've got code in test at the moment.
> This is (at least) the second instance that I've heard of where PCI
> passthrough has been implemented, but code hasn't surfaced yet.  It's
> really unfortunate to see the duplication of effort happening.
>
> Chuck Short also proposed a design summit session on it, presumably to
> discuss implementing it yet another time:
>
>     http://summit.openstack.org/cfp/details/29
>
> It would be really nice to get some code out in the open for this.  :-)
>
> --
> Russell Bryant
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ------------------------------
>
> Message: 13
> Date: Thu, 28 Mar 2013 20:20:03 +0000
> From: "Jiang, Yunhong" <yunhong.jiang at intel.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] PCI-passthrough dev ...
> Message-ID:
>         <
> DDCAE26804250545B9934A2056554AA0017A7B0E at ORSMSX107.amr.corp.intel.com>
>
> Content-Type: text/plain; charset="us-ascii"
>
> Yes, http://summit.openstack.org/cfp/details/80 is the one for nova.
>
> If this topic is accepted, a discussion on the mailing list can gather
> more input and ideas, making the session more effective. And a 'show and
> tell' before the session is a good idea.
>
> Thanks
> --jyh
>
> From: Ian Wells [mailto:ijw.ubuntu at cack.org.uk]
> Sent: Thursday, March 28, 2013 10:30 AM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] PCI-passthrough dev ...
>
> Chuck's is not the only summit session:
>
> http://summit.openstack.org/cfp/details/81 (the Quantum side of things if
> you're mapping network devices)
>
> ... and I know Mellanox have also been working on that particular side of
> things too.  Perhaps we could all get together at the summit before the
> sessions for a show and tell?  We could equally work it all out in the
> sessions, but it seems it would be hard to lead a session like that without
> having some knowledge up front of what other people have done.
>
> To be fair, our code is basically Vladimir Popovski's (Zardara Storage's)
> work, tidied up and with a scheduler check to make sure there are actually
> available SRIOV functions on the node before scheduling.  It's there, it's
> actually a patch against Folsom at the moment, and it works, but I wouldn't
> lay claim to it necessarily being the One True Way to implement this (and I
> don't think it has tests enough to pass a review).
> --
> Ian.
>
>
> On 25 March 2013 16:23, Russell Bryant <rbryant at redhat.com<mailto:
> rbryant at redhat.com>> wrote:
> On 03/24/2013 01:44 PM, Ian Wells wrote:
> > Yep, we've got code in test at the moment.
> This is (at least) the second instance that I've heard of where PCI
> passthrough has been implemented, but code hasn't surfaced yet.  It's
> really unfortunate to see the duplication of effort happening.
>
> Chuck Short also proposed a design summit session on it, presumably to
> discuss implementing it yet another time:
>
>     http://summit.openstack.org/cfp/details/29
>
> It would be really nice to get some code out in the open for this.  :-)
>
> --
> Russell Bryant
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ------------------------------
>
> Message: 14
> Date: Thu, 28 Mar 2013 17:04:21 -0400
> From: Adam Young <ayoung at redhat.com>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [keystone] Keystone handling http
>         requests        synchronously
> Message-ID: <5154B055.4030004 at redhat.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 03/26/2013 01:34 PM, David Kranz wrote:
> > This is without memcache in auth_token. I was trying to find a way
> > past https://bugs.launchpad.net/keystone/+bug/1020127
> > which I think I now have. I would appreciate it if you could validate
> > my comment at the end of that ticket. Here, I just thought that the
> > keystone
> > throughput was very low. I know that swift should not be hitting it so
> > hard. If you were referring to using memcache in the keystone server
> > itself then
> You can use memcached as an alternate token back end, but I have no
> reason to think it would perform any better than SQL. It was broken
> until fairly recently, too, so I suspect it is not used much in the wild.
>
>
> > I didn't know you could do that.
> >
> >  -David
> >
> >
> >
> > On 3/26/2013 12:33 PM, Chmouel Boudjnah wrote:
> >> this seems to be pretty low, do you have memcaching enabled?
> >>
> >> On Tue, Mar 26, 2013 at 4:20 PM, David Kranz <david.kranz at qrclab.com>
> >> wrote:
> >>> Related to this, I measured that the rate at which keystone (running
> >>> on a
> >>> real fairly hefty server) can handle the requests coming from the
> >>> auth_token
> >>> middleware (no pki tokens) is about 16/s. That seems pretty low to
> >>> me. Is
> >>> there some other keystone performance problem here, or is that not
> >>> surprising?
> >>>
> >>>   -David
> >>>
> >>>
> >>> On 3/24/2013 9:11 PM, Jay Pipes wrote:
> >>>> Sure, you could do that, of course. Just like you could use
> >>>> gunicorn or
> >>>> some other web server. Just like you could deploy any of the other
> >>>> OpenStack services that way.
> >>>>
> >>>> It would just be nice if one could configure Keystone in the same
> >>>> manner
> >>>> that all the other OpenStack services are configured.
> >>>>
> >>>> -jay
> >>>>
> >>>> On 03/23/2013 01:19 PM, Joshua Harlow wrote:
> >>>>> See: https://github.com/openstack/keystone/tree/master/httpd
> >>>>>
> >>>>> For example...
> >>>>>
> >>>>> This lets apache do the multiprocess instead of how nova, glance ...
> >>>>> have basically recreated the same mechanism that apache has had for
> >>>>> years.
> >>>>>
> >>>>> Sent from my really tiny device...
> >>>>>
> >>>>> On Mar 23, 2013, at 10:14 AM, "Joshua Harlow" <
> harlowja at yahoo-inc.com
> >>>>> <mailto:harlowja at yahoo-inc.com>> wrote:
> >>>>>
> >>>>>> Or I think you can run keystone in wsgi+apache easily, thus
> >>>>>> getting the multiprocess support via apache worker processes.
> >>>>>>
> >>>>>> Sent from my really tiny
> >>>>>> device....
> >>>>>>
> >>>>>> On Mar 22, 2013, at 10:47 AM, "Jay Pipes" <jaypipes at gmail.com
> >>>>>> <mailto:jaypipes at gmail.com>>
> >>>>>> wrote:
> >>>>>>
> >>>>>>> Unfortunately, Keystone's WSGI server is only a single process,
> >>>>>> with a
> >>>>>>> greenthread pool. Unlike Glance, Nova, Cinder, and Swift, which all
> >>>>>> use
> >>>>>>> multi-process, greenthread-pool-per-process WSGI servers[1],
> >>>>>> Keystone
> >>>>>>> does it differently[2].
> >>>>>>>
> >>>>>>> There was a patchset[3] that added
> >>>>>> multiprocess support to Keystone, but
> >>>>>>> due to objections from termie and
> >>>>>> others about it not being necessary,
> >>>>>>> it died on the vine. Termie even
> >>>>>> noted that Keystone "was designed to be
> >>>>>>> run as multiple instances and load
> >>>>>> balanced over and [he felt] that
> >>>>>>> should be the preferred scaling point".
> >>>>>>>
> >>>>>>> Because the mysql client connection is C-based, calls to it will be
> >>>>>>>
> >>>>>> blocking operations on greenthreads within a single process, meaning
> >>>>>>> even
> >>>>>> if multiple greenthreads are spawned for those 200 incoming
> >>>>>>> requests, they
> >>>>>> will be processed synchronously.
> >>>>>>> The solution is for Keystone to
> >>>>>> implement the same multi-processed WSGI
> >>>>>>> worker stuff that is in the other
> >>>>>> OpenStack projects. Or, diverge from
> >>>>>>> the deployment solution of Nova,
> >>>>>> Glance, Cinder, and Swift, and manually
> >>>>>>> run multiple instances of
> >>>>>> keystone, as Termie suggests.
> >>>>>>> Best,
> >>>>>>> -jay
> >>>>>>>
> >>>>>>> [1] All pretty much
> >>>>>> derived from the original Swift code, with some Oslo
> >>>>>>> improvements around
> >>>>>> config
> >>>>>>> [2] Compare
> >>>>>>>
> >>>>>>
> https://github.com/openstack/glance/blob/master/glance/common/wsgi.py
> >>>>>>
> >>>>>> with
> >>>>>>
> >>>>>>
> https://github.com/openstack/keystone/blob/master/keystone/common/wsgi.py
> >>>>>>
> >>>>>> [3] https://review.openstack.org/#/c/7017/
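[Editor's note] Jay's point about blocking C calls serializing greenthreads can be illustrated without eventlet. The sketch below is an analogy using stdlib asyncio, not Keystone code: a call that never yields stalls all cooperative tasks, while offloading it to a thread pool restores concurrency.

```python
import asyncio
import time

def blocking_query(duration):
    # Stand-in for a blocking C mysql-client call: it never yields
    # control to the event loop, so cooperative tasks queue behind it.
    time.sleep(duration)

async def naive(n, d=0.05):
    # Each task calls the blocking function directly, so the n calls
    # run one after another (~n * d seconds total).
    start = time.monotonic()
    async def one():
        blocking_query(d)
    await asyncio.gather(*(one() for _ in range(n)))
    return time.monotonic() - start

async def offloaded(n, d=0.05):
    # Pushing the blocking call into a thread pool lets the n calls
    # overlap (~d seconds total).
    loop = asyncio.get_running_loop()
    start = time.monotonic()
    await asyncio.gather(
        *(loop.run_in_executor(None, blocking_query, d) for _ in range(n))
    )
    return time.monotonic() - start
```

The multi-process WSGI workers Jay mentions attack the same problem from the other direction: with several OS processes, one process blocking on a C call no longer stalls requests being handled by its siblings.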
> >>>>>>> On 03/21/2013 07:45 AM,
> >>>>>> Kanade, Rohan wrote:
> >>>>>>>> Hi,
> >>>>>>>>
> >>>>>>>> I was trying to create 200 users using
> >>>>>> the keystone client. All the
> >>>>>>>> users are unique and are created on separate
> >>>>>> threads which are started
> >>>>>>>> at the same time.
> >>>>>>>>
> >>>>>>>> keystone is handling
> >>>>>> each request synchronously, i.e. user 1 is
> >>>>>>>> created, then user 2 is
> >>>>>> created ...
> >>>>>>>> Shouldn't keystone be running a greenthread for each
> >>>>>> request and try to
> >>>>>>>> create these users asynchronously?
> >>>>>>>> like start
> >>>>>> creating user 1 , while handling that request, start creating
> >>>>>>>> user 2 or
> >>>>>> user n...
> >>>>>>>> I have attached the keystone service logs for further
> >>>>>> assistance.
> >>>>>>>> http://paste.openstack.org/show/34216/
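[Editor's note] Rohan's client-side test can be sketched with stdlib threads and a stubbed creation call. The stub below is hypothetical and stands in for keystoneclient's user-creation request; whether the 200 requests actually overlap then depends entirely on the server side, which is the point of this thread.

```python
from concurrent.futures import ThreadPoolExecutor

def create_user(name):
    # Hypothetical stand-in for keystoneclient's user-creation call;
    # in Rohan's test this would be an HTTP request to Keystone.
    return {"name": name, "enabled": True}

def create_users_concurrently(names, workers=20):
    # Fire the creation requests from a pool of client-side threads,
    # mirroring "separate threads which are started at the same time".
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(create_user, names))
```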
> >>>>>>>>
> >>>>>>>>
> >>>>>>
> ______________________________________________________________________
> >>>>>>
> >>>>>> Disclaimer:This email and any attachments are sent in strictest
> >>>>>> confidence for the sole use of the addressee and may contain legally
> >>>>>> privileged, confidential, and proprietary data. If you are not the
> >>>>>> intended recipient, please advise the sender by replying promptly to
> >>>>>>>> this
> >>>>>> email and then delete and destroy this email and any attachments
> >>>>>>>> without
> >>>>>> any further use, copying or forwarding
> >>>>>>>>
> >>>>>>>>
> >>>>>> _______________________________________________
> >>>>>>>> OpenStack-dev mailing
> >>>>>> list
> >>>>>>>> OpenStack-dev at lists.openstack.org
> >>>>>>>> <mailto:OpenStack-dev at lists.openstack.org>
> >>>>>>>>
> >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>>
> >>>
> >>>
> >
> >
>
>
>
>
> ------------------------------
>
> Message: 15
> Date: Thu, 28 Mar 2013 15:13:43 -0600
> From: Mike Wilson <geekinutah at gmail.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [keystone] Keystone handling http
>         requests        synchronously
> Message-ID:
>         <
> CAFshShPRBooX+5MXDWfiz7mMwetC9X_vXYtp-9H5tFr6yRvhzA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Actually, Bluehost is using it in production. We couldn't get past a couple
> thousand nodes without it because of the amount of requests that the
> quantum network driver produces (5 every periodic interval per compute
> node). It does have some problems if one tenant builds up a large list of
> tokens, but other than that it has been great for us. I think our
> deployment is somewhere around 15,000 nodes right now and it is still
> holding up strong. It is MUCH more performant than just a plain SQL
> backend.
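[Editor's note] For reference, switching to the memcached token backend Mike and Adam discuss looked roughly like this in a Grizzly-era keystone.conf. The driver path and section names are from memory and should be verified against your release:

```ini
[token]
driver = keystone.token.backends.memcache.Token

[memcache]
servers = 127.0.0.1:11211
```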
>
>
> On Thu, Mar 28, 2013 at 3:04 PM, Adam Young <ayoung at redhat.com> wrote:
>
> > On 03/26/2013 01:34 PM, David Kranz wrote:
> >
> >> This is without memcache in auth_token. I was trying to find a way past
> >> https://bugs.launchpad.net/keystone/+bug/1020127
> >> which I think I now have. I  would appreciate it if you could validate
> my
> >> comment at the end of that ticket. Here, I just thought that the
> keystone
> >> throughput was very low. I know that swift should not be hitting it so
> >> hard. If you were referring to using memcache in the keystone server
> itself
> >> then
> >>
> > You can use memcached as an alternate token  back end, but I have no
> > reason to think it would perform any better than SQL.  It was broken until
> > fairly recently, too, so I suspect it is not used much in the wild.
> >
> >
> >
> >  I didn't know you could do that.
> >>
> >>  -David
> >>
> >>
> >>
> >> On 3/26/2013 12:33 PM, Chmouel Boudjnah wrote:
> >>
> >>> this seems to be pretty low, do you have memcaching enabled?
> >>>
> >>> On Tue, Mar 26, 2013 at 4:20 PM, David Kranz <david.kranz at qrclab.com>
> >>> wrote:
> >>>
> >>>> Related to this, I measured that the rate at which keystone (running
> on
> >>>> a
> >>>> real fairly hefty server) can handle the requests coming from the
> >>>> auth_token
> >>>> middleware (no pki tokens) is about 16/s. That seems pretty low to me.
> >>>> Is
> >>>> there some other keystone performance problem here, or is that not
> >>>> surprising?
> >>>>
> >>>>   -David
> >>>>
> >>>>
> >>>> On 3/24/2013 9:11 PM, Jay Pipes wrote:
> >>>>
> >>>>> Sure, you could do that, of course. Just like you could use gunicorn
> or
> >>>>> some other web server. Just like you could deploy any of the other
> >>>>> OpenStack services that way.
> >>>>>
> >>>>> It would just be nice if one could configure Keystone in the same
> >>>>> manner
> >>>>> that all the other OpenStack services are configured.
> >>>>>
> >>>>> -jay
> >>>>>
> >>>>> On 03/23/2013 01:19 PM, Joshua Harlow wrote:
> >>>>>
> >>>>>> See: https://github.com/openstack/keystone/tree/master/httpd
> >>>>>>
> >>>>>> For example...
> >>>>>>
> >>>>>> This lets apache do the multiprocess instead of how nova, glance ...
> >>>>>> have basically recreated the same mechanism that apache has had for
> >>>>>> years.
> >>>>>>
> >>>>>> Sent from my really tiny device...
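[Editor's note] A hypothetical httpd vhost along the lines of the sample files Joshua references; the port, paths, and user below are illustrative assumptions, not copied from the keystone/httpd directory:

```apache
<VirtualHost *:5000>
    # One daemon process group; 'processes=4' is what gives Keystone
    # the multi-process behaviour discussed in this thread.
    WSGIDaemonProcess keystone-public processes=4 threads=1 user=keystone
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /var/www/keystone/main
</VirtualHost>
```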
> >>>>>>
> >>>>>> On Mar 23, 2013, at 10:14 AM, "Joshua Harlow" <
> harlowja at yahoo-inc.com
> >>>>>> <mailto:harlowja at yahoo-inc.com>> wrote:
> >>>>>>
> >>>>>>  Or I think you can run keystone in wsgi+apache easily, thus getting you
> >>>>>>> the
> >>>>>>> multiprocess support via apache worker processes.
> >>>>>>>
> >>>>>>> Sent from my really tiny
> >>>>>>> device....
> >>>>>>>
> >>>>>>> On Mar 22, 2013, at 10:47 AM, "Jay Pipes" <jaypipes at gmail.com
> >>>>>>> <mailto:jaypipes at gmail.com>>
> >>>>>>> wrote:
> >>>>>>>
> >>>>>>>  Unfortunately, Keystone's WSGI server is only a single process,
> >>>>>>>>
> >>>>>>> with a
> >>>>>>>
> >>>>>>>> greenthread pool. Unlike Glance, Nova, Cinder, and Swift, which
> all
> >>>>>>>>
> >>>>>>> use
> >>>>>>>
> >>>>>>>> multi-process, greenthread-pool-per-process WSGI servers[1],
> >>>>>>>>
> >>>>>>> Keystone
> >>>>>>>
> >>>>>>>> does it differently[2].
> >>>>>>>>
> >>>>>>>> There was a patchset[3] that added
> >>>>>>>>
> >>>>>>> multiprocess support to Keystone, but
> >>>>>>>
> >>>>>>>> due to objections from termie and
> >>>>>>>>
> >>>>>>> others about it not being necessary,
> >>>>>>>
> >>>>>>>> it died on the vine. Termie even
> >>>>>>>>
> >>>>>>> noted that Keystone "was designed to be
> >>>>>>>
> >>>>>>>> run as multiple instances and load
> >>>>>>>>
> >>>>>>> balanced over and [he felt] that
> >>>>>>>
> >>>>>>>> should be the preferred scaling point".
> >>>>>>>>
> >>>>>>>> Because the mysql client connection is C-based, calls to it will
> be
> >>>>>>>>
> >>>>>>>>  blocking operations on greenthreads within a single process,
> >>>>>>> meaning
> >>>>>>>
> >>>>>>>> even
> >>>>>>>>
> >>>>>>> if multiple greenthreads are spawned for those 200 incoming
> >>>>>>>
> >>>>>>>> requests, they
> >>>>>>>>
> >>>>>>> will be processed synchronously.
> >>>>>>>
> >>>>>>>> The solution is for Keystone to
> >>>>>>>>
> >>>>>>> implement the same multi-processed WSGI
> >>>>>>>
> >>>>>>>> worker stuff that is in the other
> >>>>>>>>
> >>>>>>> OpenStack projects. Or, diverge from
> >>>>>>>
> >>>>>>>> the deployment solution of Nova,
> >>>>>>>>
> >>>>>>> Glance, Cinder, and Swift, and manually
> >>>>>>>
> >>>>>>>> run multiple instances of
> >>>>>>>>
> >>>>>>> keystone, as Termie suggests.
> >>>>>>>
> >>>>>>>> Best,
> >>>>>>>> -jay
> >>>>>>>>
> >>>>>>>> [1] All pretty much
> >>>>>>>>
> >>>>>>> derived from the original Swift code, with some Oslo
> >>>>>>>
> >>>>>>>> improvements around
> >>>>>>>>
> >>>>>>> config
> >>>>>>>
> >>>>>>>> [2] Compare
> >>>>>>>>
> >>>>>>> https://github.com/openstack/glance/blob/master/glance/common/wsgi.py
> >>>>>>> with
> >>>>>>>
> >>>>>>> https://github.com/openstack/keystone/blob/master/keystone/common/wsgi.py
> >>>>>>> [3] https://review.openstack.org/#/c/7017/
> >>>>>>>
> >>>>>>>> On 03/21/2013 07:45 AM,
> >>>>>>>>
> >>>>>>> Kanade, Rohan wrote:
> >>>>>>>
> >>>>>>>> Hi,
> >>>>>>>>>
> >>>>>>>>> I was trying to create 200 users using
> >>>>>>>>>
> >>>>>>>> the keystone client. All the
> >>>>>>>
> >>>>>>>> users are unique and are created on separate
> >>>>>>>>>
> >>>>>>>> threads which are started
> >>>>>>>
> >>>>>>>> at the same time.
> >>>>>>>>>
> >>>>>>>>> keystone is handling
> >>>>>>>>>
> >>>>>>>> each request synchronously, i.e. user 1 is
> >>>>>>>
> >>>>>>>> created, then user 2 is
> >>>>>>>>>
> >>>>>>>> created ...
> >>>>>>>
> >>>>>>>>> Shouldn't keystone be running a greenthread for each
> >>>>>>>>>
> >>>>>>>> request and try to
> >>>>>>>
> >>>>>>>> create these users asynchronously?
> >>>>>>>>> like start
> >>>>>>>>>
> >>>>>>>> creating user 1 , while handling that request, start creating
> >>>>>>>
> >>>>>>>> user 2 or
> >>>>>>>>>
> >>>>>>>> user n...
> >>>>>>>
> >>>>>>>> I have attached the keystone service logs for further
> >>>>>>>>>
> >>>>>>>> assistance.
> >>>>>>>
> >>>>>>>> http://paste.openstack.org/show/34216/
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>>>
> >>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>
> >>
> >>
> >
> >
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20130328/75e55931/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 16
> Date: Thu, 28 Mar 2013 23:22:02 +0200
> From: Itzik Brown <itzikb at dev.mellanox.co.il>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] PCI-passthrough dev ...
> Message-ID: <5154B47A.6020904 at dev.mellanox.co.il>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>
> Ian,
>
> I'm going to send a basic VIF Driver and a config for a network
> configuration using macvtap as part of Mellanox Quantum Plugin.
> We can schedule an online meeting to discuss some of the key points we
> need to address before the summit.
>
> Itzik
>
> On 3/28/2013 7:29 PM, Ian Wells wrote:
> > Chuck's is not the only summit session:
> >
> > http://summit.openstack.org/cfp/details/81 (the Quantum side of things
> > if you're mapping network devices)
> >
> > ... and I know Mellanox have also been working on that particular side
> > of things too.  Perhaps we could all get together at the summit before
> > the sessions for a show and tell?  We could equally work it all out in
> > the sessions, but it seems it would be hard to lead a session like
> > that without having some knowledge up front of what other people have
> > done.
> >
> > To be fair, our code is basically Vladimir Popovski's (Zardara
> > Storage's) work, tidied up and with a scheduler check to make sure
> > there are actually available SRIOV functions on the node before
> > scheduling.  It's there, it's actually a patch against Folsom at the
> > moment, and it works, but I wouldn't lay claim to it necessarily being
> > the One True Way to implement this (and I don't think it has tests
> > enough to pass a review).
> > --
> > Ian.
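[Editor's note] The scheduler check Ian mentions could look something like the filter sketch below. The filter shape and the `free_sriov_vfs` attribute are assumptions for illustration, not the actual Folsom patch or nova's real filter API:

```python
from types import SimpleNamespace

class SriovFunctionsFilter:
    """Pass only hosts that still report free SR-IOV virtual functions.

    A sketch of the idea Ian describes; 'free_sriov_vfs' and
    'requested_vfs' are hypothetical names.
    """

    def host_passes(self, host_state, filter_properties):
        requested = filter_properties.get("requested_vfs", 0)
        free = getattr(host_state, "free_sriov_vfs", 0)
        # Only schedule onto hosts with enough unclaimed VFs left.
        return free >= requested
```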
> >
> >
> >
> > On 25 March 2013 16:23, Russell Bryant <rbryant at redhat.com
> > <mailto:rbryant at redhat.com>> wrote:
> >
> >     On 03/24/2013 01:44 PM, Ian Wells wrote:
> >     > Yep, we've got code in test at the moment.
> >
> >     This is (at least) the second instance that I've heard of where PCI
> >     passthrough has been implemented, but code hasn't surfaced yet.  It's
> >     really unfortunate to see the duplication of effort happening.
> >
> >     Chuck Short also proposed a design summit session on it, presumably
> to
> >     discuss implementing it yet another time:
> >
> >     http://summit.openstack.org/cfp/details/29
> >
> >     It would be really nice to get some code out in the open for this.
> >      :-)
> >
> >     --
> >     Russell Bryant
> >
> >
> >
> >
> >
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20130328/4697e3f7/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 17
> Date: Thu, 28 Mar 2013 15:14:31 -0700
> From: Pattabi Ayyasami <pattabi at Brocade.com>
> To: "'openstack-dev at lists.openstack.org'
>         (openstack-dev at lists.openstack.org)"
>         <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [Quantum][LBaaS]- - LBaaS Extension in
>         Quantum Plugin
> Message-ID:
>         <
> 62F41AB0AC0AB541BC3C2A731A7788C60140760E11B1 at HQ1-EXCH03.corp.brocade.com>
>
> Content-Type: text/plain; charset="us-ascii"
>
>
> Hi Eugene and All,
>
> I just happened to notice this email thread in my digest. Sorry for the
> late query on this.
> I am kinda lost on this.  Please help me understand.
>
>
> My team is currently working on providing the Brocade LBaaS Driver. We
> currently have implemented the Driver as per the Driver APIs and installing
> the patch as per https://review.openstack.org/#/c/20579 on top of the
> Quantum Code base and validated the functionality end-to-end using the
> Quantum CLIs for the LBaaS as per
> https://wiki.openstack.org/wiki/Quantum/LBaaS/CLI. FYI, our Brocade Load
> Balancer is currently h/w based.
>
> Now, I see that https://review.openstack.org/#/c/20579 review is
> abandoned.  What does it mean now? Driver framework code as suggested by
> https://review.openstack.org/#/c/20579 is no longer applicable?
> Should we now wait for summit to discuss further on the next steps for the
> vendors to integrate their drivers?
>
> Also, I would like to be part of the weekly meetings on LBaaS and where
> can I find the meeting details?
>
> Any detailed clarification on where we stand on supporting LBaaS in
> Quantum for Grizzly and what should the vendors do for the vendor specific
> drivers would greatly help in planning .
>
> Thanks.
> Pattabi
>
> =====================================================================
> Sure. Let us plan again to make it happen in forthcoming releases.
>
> Thanks
> Anand
>
> On Mar 14, 2013, at 8:30 AM, "Eugene Nikanorov" <enikanorov at mirantis.com
> <mailto:enikanorov at mirantis.com>> wrote:
>
> Hi Anand,
>
> Unfortunately, support for all kinds of LB devices, or even a driver framework
> for such support, appeared to be a pretty large feature that put too much
> reviewing/testing load on Quantum's core development team.
> So they proposed an alternative solution which is much simpler but supports
> only the process-on-host approach.
> I think all that we've discussed was not discarded though.
> But obviously feature-rich LBaaS implementation is moved to the next
> release cycle.
>
> By the way, we've got code that implements initially proposed approach (as
> described on the wiki) so I hope we'll get it merged in Havana much sooner.
> That could allow us to move forward with developing advanced features like
> service types, routed LB insertion, etc.
>
> Thanks,
> Eugene.
>
>
>
> On Thu, Mar 14, 2013 at 7:06 PM, Palanisamy, Anand <apalanisamy at paypal.com
> <mailto:apalanisamy at paypal.com>> wrote:
> Eugene,
>
> First of all, I was surprised that we do not have any support for h/w LBs
> and VIrtual LBs.
>
> Now we badly need to get into the architecture discussion again to address all
> these concerns before we go to the summit.
>
> Please let me know your suggestions/comments.
>
> Thanks
> Anand
>
> On Mar 14, 2013, at 7:54 AM, "Eugene Nikanorov" <enikanorov at mirantis.com
> <mailto:enikanorov at mirantis.com>> wrote:
>
> I'm afraid there's no detailed description for grizzly lbaas architecture
> right now.
> > So, is this similar to current L3_Agent daemon we have in Quantum for
> Folsom release?
> Correct.
>
> >As well, the confusion is about the general plug-in and agent architecture
> [similar to OVS] we have in OpenStack, where the plug-in will be in the
> Controller and the agent has to be on the Compute Node.
> Right, lbaas plugin runs within quantum-server, lbaas agent may run on
> network controller or on some compute node (remember that it must run on
> one host only)
>
> >So, when we are trying for Service Insertion, do we need to have the same
> architecture as the Plug-in and Agent above, or should it be generic in such a
> way that, independent of the underlying hardware/products, we are able to
> bring up services?
> I'm not sure I understood your question.
> Currently quantum's lbaas plugin supports the only type of loadbalancer
> and it's not customizable via drivers at this point.
>
> Thanks,
> Eugene.
>
> On Thu, Mar 14, 2013 at 6:09 PM, balaji patnala <patnala003 at gmail.com
> <mailto:patnala003 at gmail.com>> wrote:
> Hi Eugene,
>
>
> >With the current lbaas implementation the link that you've provided is no
> longer accurate, as the current implementation has adopted a different architecture.
>
> Can you point me to the links for current implementation details.
>
>
> As well, the confusion is about the general plug-in and agent architecture
> [similar to OVS] we have in OpenStack, where the plug-in will be in the
> Controller and the agent has to be on the Compute Node.
>
> So, when we are trying for Service Insertion, do we need to have the same
> architecture as the Plug-in and Agent above, or should it be generic in such a
> way that, independent of the underlying hardware/products, we are able to
> bring up services?
>
> >Current implementation only supports haproxy-on-the-host solution so it's
> not suitable for hardware/VM LBs.
>
> So, is this similar to current L3_Agent daemon we have in Quantum for
> Folsom release?
>
> Thanks,
> Balaji.P
>
> On Thu, Mar 14, 2013 at 5:25 PM, Eugene Nikanorov <enikanorov at mirantis.com
> <mailto:enikanorov at mirantis.com>> wrote:
> Hi Balaji,
>
> With the current lbaas implementation the link that you've provided is no
> longer accurate, as the current implementation has adopted a different architecture.
>
> > Can you please throw some light on the Agent part of the architecture,
> like where exactly the agent will be running - the OpenStack Controller Node
> or the OpenStack Compute Node?
> In grizzly, lbaas agent should run on some node - it could be compute node
> or network controller node.
> The only important thing is that there MUST be only one instance of the lbaas
> agent running.
>
> Current implementation only supports haproxy-on-the-host solution so it's
> not suitable for hardware/VM LBs.
> Support for such use case is planned in the next release.
>
> Thanks,
> Eugene.
>
> On Thu, Mar 14, 2013 at 3:46 PM, balaji patnala <patnala003 at gmail.com
> <mailto:patnala003 at gmail.com>> wrote:
> Hi Ilya,
>
> As described in the document given in the below link:
>
> http://wiki.openstack.org/Quantum/LBaaS/Agent
>
> Will the Agent part be running on the Compute Node or the Controller Node?
>
> I guess it should be on the Controller Node only, as the driver
> abstraction layer is for choosing the right driver for the device it has to
> connect to, e.g. Driver A for Device Type A. [This approach is for HW
> device LBs]
>
> If we want to have a SW-based LB like an SLB VM, is the above architecture
> still valid?
>
> Can you please throw some light on the Agent part of the architecture,
> like where exactly the agent will be running - the OpenStack Controller Node
> or the OpenStack Compute Node?
>
> Thanks in advance.
>
> Regards,
> Balaji.P
>
> On Thu, Feb 7, 2013 at 8:26 PM, Ilya Shakhat <ishakhat at mirantis.com
> <mailto:ishakhat at mirantis.com>> wrote:
> Hi Pattabi,
>
> Code for LBaaS agent and driver API is on review. You may check it from
> Gerrit topic https://review.openstack.org/#/c/20579. Instructions on how
> to run the code in DevStack environment are at
> http://wiki.openstack.org/Quantum/LBaaS/Agent. Driver API is documented
> at http://wiki.openstack.org/Quantum/LBaaS/DriverAPI
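[Editor's note] As a rough illustration of what a vendor driver hooking into such a framework might look like, here is a minimal sketch. The method names and `NoopDriver` are assumptions for illustration; the real contract is the one documented on the DriverAPI wiki page above:

```python
import abc

class LoadBalancerDriver(abc.ABC):
    # Hypothetical shape of a vendor driver contract; the actual
    # interface is defined by the Quantum/LBaaS DriverAPI docs.
    @abc.abstractmethod
    def create_vip(self, context, vip): ...

    @abc.abstractmethod
    def delete_vip(self, context, vip): ...

class NoopDriver(LoadBalancerDriver):
    """In-memory driver used here only to exercise the interface."""

    def __init__(self):
        self.vips = {}

    def create_vip(self, context, vip):
        # A real driver would push this config to its HW/VM appliance.
        self.vips[vip["id"]] = dict(vip)

    def delete_vip(self, context, vip):
        self.vips.pop(vip["id"], None)
```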
>
> Thanks,
> Ilya
>
>
> 2013/2/7 Avishay Balderman <AvishayB at radware.com<mailto:
> AvishayB at radware.com>>
> The basic lbaas driver is not committed yet - it is under review.
>
> From: Pattabi Ayyasami [mailto:pattabi at Brocade.com<mailto:
> pattabi at Brocade.com>]
> Sent: Thursday, February 07, 2013 3:06 AM
> To: openstack-dev at lists.openstack.org<mailto:
> openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [Quantum][LBaaS] - LBaaS Extension in Quantum
> Plugin
>
> Hi,
>
> I am in the process of adding vendor specific plugin implementation for
> LBaaS as a service. I have my stand alone driver ready and would like to
> integrate with the framework.
> I looked at the latest GitHub repository, https://github.com/openstack/quantum.
> I do not find any code that allows me to hook my plugin code into the framework.
>
> Really appreciate if someone could provide me any pointers on how I go
> about doing it.
>
> Regards,
> Pattabi
>
> ================================================================
>
>
>
>
> ------------------------------
>
> Message: 18
> Date: Thu, 28 Mar 2013 15:22:40 -0700
> From: Stefano Maffulli <stefano at openstack.org>
> To: openstack-dev at lists.openstack.org
> Cc: atul jha <koolhead17 at gmail.com>
> Subject: Re: [openstack-dev] Future of Launchpad Answers [Community]
> Message-ID: <5154C2B0.4010707 at openstack.org>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 03/27/2013 12:08 AM, Jesse Pretorius wrote:
> > Personally I'm in favour of the full import into Ask. That leaves a
> > legacy of information to go back on. The voting for the best answer
> > isn't entirely necessary in my view - over time the voting will come in
> > from people who found the information useful.
>
> This makes sense indeed. Atul was thinking of copying over 'manually'
> the best/most recent ones. I have no idea of what makes more sense, I
> guess it depends on the quantity of questions that are worth moving over.
>
> > On 26 March 2013 17:40, Thierry Carrez wrote:
> >     That makes it certainly possible to select questions ('Answered'
> ones ?)
> >     and import them with all comments, although proper tagging and
> selection
> >     of best answer would be missing... requiring some editorial pass
> >     afterwards.
>
> Atul: what do you think?
>
> >     Alternatively, we can just keep LP answers going for selected
> projects
> >     that made good use of them.
>
> I advise strongly against this. LP has a pretty bad UI, mandates an LP
> login, the product is probably not going to be developed further and
> most of all using different tools will split a nascent community of
> users now that we're increasing the efforts to strengthen it. Let's move
> the good content to Ask and use that.
>
> /stef
>
>
>
> ------------------------------
>
> Message: 19
> Date: Thu, 28 Mar 2013 17:28:48 -0500
> From: Anne Gentle <anne at openstack.org>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Cc: atul jha <koolhead17 at gmail.com>
> Subject: Re: [openstack-dev] Future of Launchpad Answers [Community]
> Message-ID:
>         <
> CAD0KtVFhVw2q8E07jkZzs_O_2SPu-PGX6_pDwPAuREn2Ls32Rg at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Thu, Mar 28, 2013 at 5:22 PM, Stefano Maffulli <stefano at openstack.org
> >wrote:
>
> > On 03/27/2013 12:08 AM, Jesse Pretorius wrote:
> >
> >> Personally I'm in favour of the full import into Ask. That leaves a
> >> legacy of information to go back on. The voting for the best answer
> >> isn't entirely necessary in my view - over time the voting will come in
> >> from people who found the information useful.
> >>
> >
> > This makes sense indeed. Atul was thinking of copying over 'manually' the
> > best/most recent ones. I have no idea of what makes more sense, I guess
> it
> > depends on the quantity of questions that are worth moving over.
> >
> >
> >  On 26 March 2013 17:40, Thierry Carrez wrote:
> >>     That makes it certainly possible to select questions ('Answered'
> ones
> >> ?)
> >>     and import them with all comments, although proper tagging and
> >> selection
> >>     of best answer would be missing... requiring some editorial pass
> >>     afterwards.
> >>
> >
> > Atul: what do you think?
> >
> >
> >      Alternatively, we can just keep LP answers going for selected
> projects
> >>     that made good use of them.
> >>
> >
> > I advise strongly against this. LP has a pretty bad UI, mandates an LP
> > login, the product is probably not going to be developed further and most
> > of all using different tools will split a nascent community of users now
> > that we're increasing the efforts to strengthen it. Let's move the good
> > content to Ask and use that.
> >
>
> I'm with Stefano on this item. My vote is for a changeover with an editorial
> pass. Let's avoid seeing people ask a question about where to post their
> question.
>
> Hopefully that works for you John - so people won't ask to ask, they'll
> just ask. :)
> Anne
>
> >
> > /stef
> >
> >
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20130328/92b3819e/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 20
> Date: Thu, 28 Mar 2013 17:14:50 -0700
> From: Nachi Ueno <nachi at nttmcl.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Quantum] Summit Sessions
> Message-ID:
>         <
> CABJepwgf6tVkjAoFVmYK2n+u-yQMgfg4cN56LndVcTJpbjStmw at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi Henry
>
> Thanks! Sounds great
>
> 2013/3/28 Henry Gessau <gessau at cisco.com>:
> > Hi Nachi,
> >
> > Thanks for bringing this to my attention. My initial reaction is that,
> yes,
> > it should be covered by QoS. I will refer to it in my write-up for the
> QoS
> > proposal, and keep in touch with you for a potential merge.
> >
> > -- Henry
> >
> > On Wed, Mar 27, at 7:56 pm, Nachi Ueno <nachi at nttmcl.com> wrote:
> >
> >> Hi
> >>
> >> I'm also planning to implement related feature in H.
> >> BP
> https://blueprints.launchpad.net/quantum/+spec/quantum-basic-traffic-control-on-external-gateway
> >>
> >> Basically, I want to stop one tenant from exhausting the external
> >> network connection
> >>
> >> Maybe we can merge our proposals.
> >> Your qos api is per port based one?
> >>
> >> Regards
> >> Nachi
> >>
> >> 2013/3/27 Henry Gessau <gessau at cisco.com>:
> >>> I will be adding some more details to the proposal soon.
> >>>
> >>> -- Henry
> >>>
> >>> On Wed, Mar 27, at 10:50 am, gong yong sheng <
> gongysh at linux.vnet.ibm.com> wrote:
> >>>
> >>>> It will help if you can have some design ready before the summit discussion.
> >>>> On 03/27/2013 10:33 PM, Sean M. Collins wrote:
> >>>>> I'd like to get the QoS API proposal in as well.
> >>>>>
> >>>>> http://summit.openstack.org/cfp/details/160
> >>>>>
> >>>>> I am currently working with Comcast, and this is a must-have feature
> in
> >>>>> Quantum.
> >>>>>
> >>>>>
> >>>>>
>
>
>
> ------------------------------
>
> Message: 21
> Date: Fri, 29 Mar 2013 04:24:14 +0100
> From: Monty Taylor <mordred at inaugust.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Quantum][LBaaS]- - LBaaS Extension in
>         Quantum Plugin
> Message-ID: <5155095E.3040604 at inaugust.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On 03/28/2013 11:14 PM, Pattabi Ayyasami wrote:
> >
> > Hi Eugene and All,
> >
> > I just happened to notice this email thread in my digest. Sorry for
> > the late query on this. I am kinda lost on this.  Please help me
> > understand.
> >
> >
> > My team is currently working on providing the Brocade LBaaS Driver.
> > We have implemented the Driver as per the Driver APIs, installed the
> > patch from https://review.openstack.org/#/c/20579 on top of the
> > Quantum code base, and validated the functionality end-to-end using
> > the Quantum CLIs for LBaaS as per
> > https://wiki.openstack.org/wiki/Quantum/LBaaS/CLI. FYI, our Brocade
> > Load Balancer is currently h/w based.
>
> Awesome! Happy to know that you've done that.
>
> > Now, I see that https://review.openstack.org/#/c/20579 review is
> > abandoned.  What does it mean now? Driver framework code as suggested
> > by https://review.openstack.org/#/c/20579 is no longer applicable?
> > Should we now wait for summit to discuss further on the next steps
> > for the vendors to integrate their drivers?
>
> Our system automatically abandons patches with negative reviews after a
> week so that we don't keep a lot of cruft around. In this case though,
> Dan just put a -2 on the review to prevent it from accidentally getting
> merged before we open up development for havana again. As soon as havana
> is open, we can restore that patch and work on getting it applied - so
> no need to worry!
>
> > Also, I would like to be part of the weekly meetings on LBaaS and
> > where can I find the meeting details?
> >
> > Any detailed clarification on where we stand on supporting LBaaS in
> > Quantum for Grizzly and what should the vendors do for the vendor
> > specific drivers would greatly help in planning .
> >
> > Thanks. Pattabi
> >
> > =====================================================================
> >
> >
> Sure. Let us plan again to make it happen in forthcoming releases.
> >
> > Thanks Anand
> >
> > On Mar 14, 2013, at 8:30 AM, "Eugene Nikanorov"
> > <enikanorov at mirantis.com<mailto:enikanorov at mirantis.com>> wrote:
> >
> > Hi Anand,
> >
> > Unfortunately, support for all kinds of LB devices, or even a driver
> > framework for such support, turned out to be a pretty large feature that
> > put too much reviewing/testing load on quantum's core development
> > team. So they proposed an alternative solution which is much simpler but
> > supports only the process-on-host approach. I think all that we've
> > discussed was not discarded though. But obviously the feature-rich LBaaS
> > implementation has moved to the next release cycle.
> >
> > By the way, we've got code that implements initially proposed
> > approach (as described on the wiki) so I hope we'll get it merged in
> > Havana much sooner. That could allow us to move forward with
> > developing advanced features like service types, routed LB insertion,
> > etc.
> >
> > Thanks, Eugene.
> >
> >
> >
> > On Thu, Mar 14, 2013 at 7:06 PM, Palanisamy, Anand
> > <apalanisamy at paypal.com<mailto:apalanisamy at paypal.com>> wrote:
> > Eugene,
> >
> > First of all, I was surprised that we do not have any support for h/w
> > LBs and Virtual LBs.
> >
> > Now we badly need to get into the architecture discussion again to
> > address all these concerns before we go to the summit.
> >
> > Pls let me know suggestions/comments.
> >
> > Thanks Anand
> >
> > On Mar 14, 2013, at 7:54 AM, "Eugene Nikanorov"
> > <enikanorov at mirantis.com<mailto:enikanorov at mirantis.com>> wrote:
> >
> > I'm afraid there's no detailed description for grizzly lbaas
> > architecture right now.
> >> So, is this similar to current L3_Agent daemon we have in Quantum
> >> for Folsom release?
> > Correct.
> >
> >> There is also confusion about the general plug-in and agent
> >> architecture [similar to OVS] we have in OpenStack: the plug-in will
> >> be in the Controller and the agent has to be on the Compute Node.
> > Right, lbaas plugin runs within quantum-server, lbaas agent may run
> > on network controller or on some compute node (remember that it must
> > run on one host only)
> >
> >> So, when we are trying for Service Insertion, do we need to have
> >> the same plug-in and agent architecture as above, or should it be
> >> generic in such a way that, independent of the underlying
> >> hardware/products, we are able to bring up services?
> > I'm not sure I understood your question. Currently quantum's lbaas
> > plugin supports only one type of load balancer, and it's not
> > customizable via drivers at this point.
> >
> > Thanks, Eugene.
> >
> > On Thu, Mar 14, 2013 at 6:09 PM, balaji patnala
> > <patnala003 at gmail.com<mailto:patnala003 at gmail.com>> wrote: Hi
> > Eugene,
> >
> >
> >> With the current lbaas implementation, the link that you've provided
> >> is no longer accurate, as the current implementation has adopted a
> >> different architecture.
> >
> > Can you point me to the links for current implementation details.
> >
> >
> > There is also confusion about the general plug-in and agent
> > architecture [similar to OVS] we have in OpenStack: the plug-in will be
> > in the Controller and the agent has to be on the Compute Node.
> >
> > So, when we are trying for Service Insertion, do we need to have the same
> > plug-in and agent architecture as above, or should it be generic in
> > such a way that, independent of the underlying hardware/products, we
> > are able to bring up services?
> >
> >> Current implementation only supports haproxy-on-the-host solution
> >> so it's not suitable for hardware/VM LBs.
> >
> > So, is this similar to current L3_Agent daemon we have in Quantum for
> > Folsom release?
> >
> > Thanks, Balaji.P
> >
> > On Thu, Mar 14, 2013 at 5:25 PM, Eugene Nikanorov
> > <enikanorov at mirantis.com<mailto:enikanorov at mirantis.com>> wrote: Hi
> > Balaji,
> >
> > With the current lbaas implementation, the link that you've provided
> > is no longer accurate, as the current implementation has adopted a
> > different architecture.
> >
> >> Can you please throw some light on the agent part of the
> >> architecture, i.e. where exactly will the agent be running: the
> >> OpenStack Controller Node or the OpenStack Compute Node?
> In grizzly, the lbaas agent should run on some node - it could be a compute
> node or a network controller node. The only important thing is that there
> MUST be only one instance of the lbaas agent running.
> >
> > Current implementation only supports haproxy-on-the-host solution so
> > it's not suitable for hardware/VM LBs. Support for such use case is
> > planned in the next release.
> >
> > Thanks, Eugene.
> >
> > On Thu, Mar 14, 2013 at 3:46 PM, balaji patnala
> > <patnala003 at gmail.com<mailto:patnala003 at gmail.com>> wrote: Hi Ilya,
> >
> > As described in the document given in the below link:
> >
> > http://wiki.openstack.org/Quantum/LBaaS/Agent
> >
> > Will the agent part be running on the Compute Node or the Controller Node?
> >
> > I guess it should be on the Controller Node only, since the driver
> > abstraction layer is for choosing the right driver for the device it
> > has to connect to, e.g. Driver A for Device Type A. [This
> > approach is for a HW device LB]
> >
> > If we want to have a SW-based LB like an SLB VM, is the above
> > architecture still valid?
> >
> > Can you please throw some light on the agent part of the
> > architecture, i.e. where exactly will the agent be running: the
> > OpenStack Controller Node or the OpenStack Compute Node?
> >
> > Thanks in advance.
> >
> > Regards, Balaji.P
> >
> > On Thu, Feb 7, 2013 at 8:26 PM, Ilya Shakhat
> > <ishakhat at mirantis.com<mailto:ishakhat at mirantis.com>> wrote: Hi
> > Pattabi,
> >
> > Code for LBaaS agent and driver API is on review. You may check it
> > from Gerrit topic https://review.openstack.org/#/c/20579.
> > Instructions on how to run the code in DevStack environment are at
> > http://wiki.openstack.org/Quantum/LBaaS/Agent. Driver API is
> > documented at http://wiki.openstack.org/Quantum/LBaaS/DriverAPI
> >
> > Thanks, Ilya
> >
> >
> > 2013/2/7 Avishay Balderman
> > <AvishayB at radware.com<mailto:AvishayB at radware.com>> The basic lbaas
> > driver is not committed yet; it is under review.
> >
> > From: Pattabi Ayyasami
> > [mailto:pattabi at Brocade.com<mailto:pattabi at Brocade.com>] Sent:
> > Thursday, February 07, 2013 3:06 AM To:
> > openstack-dev at lists.openstack.org<mailto:
> openstack-dev at lists.openstack.org>
> >
> >
> Subject: [openstack-dev] [Quantum][LBaaS] - LBaaS Extension in Quantum
> Plugin
> >
> > Hi,
> >
> > I am in the process of adding vendor specific plugin implementation
> > for LBaaS as a service. I have my stand alone driver ready and would
> > like to integrate with the framework. I looked at the latest Git Hub
> > https://github.com/openstack/quantum repository. I do not find any
> > code that allows me to hook my plugin code to the framework.
> >
> > I would really appreciate it if someone could provide pointers on how
> > I should go about doing it.
> >
> > Regards, Pattabi
> >
> > ================================================================
> >
> >
> >
>
>
>
> ------------------------------
>
> Message: 22
> Date: Fri, 29 Mar 2013 04:25:14 +0100
> From: Monty Taylor <mordred at inaugust.com>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [EHO] Project name change [Savanna]
> Message-ID: <5155099A.8000306 at inaugust.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
>
>
> On 03/22/2013 08:22 PM, Sergey Lukjanov wrote:
> > Hi everybody,
> >
> > we have changed our project codename to Savanna (from EHO). Here is our
> new wiki page - https://wiki.openstack.org/wiki/Savanna and new site -
> http://savanna.mirantis.com.
> >
> > We decided to do that because of the following reasons:
> > * we don't want to violate trademarks usage of Hadoop and OpenStack
> > * we think that Savanna sounds much better than EHO for the
> English-speaking audience
> > * if Savanna becomes an integrated OpenStack project, OpenStack
> Savanna is much better than OpenStack Elastic Hadoop on OpenStack
> > * Savanna is the place where elephants live :)
>
> I like elephants!
>
>
>
> ------------------------------
>
> Message: 23
> Date: Fri, 29 Mar 2013 04:30:37 +0100
> From: Monty Taylor <mordred at inaugust.com>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [EHO] Project name change [Savanna]
> Message-ID: <51550ADD.7090403 at inaugust.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
>
>
>
> Hi!
>
> I just looked at your wiki page - it's looking good. Have you guys
> looked at integrating heat into your provisioning step? It seems like if
> you're going to spin up elastic clusters there may be some overlap.
>
> Also, I don't know if you are aware of it, but the TripleO group has
> made a tool in stackforge called diskimage-builder (we're not as good at
> names as you are). We've already worked with the reddwarf guys to make it
> the basis of the service images they deploy. It seems that since you
> deploy images with hadoop pre-installed, a mechanism to describe and
> create those images is going to be something you'll need... we'd love to
> talk with you about it at some point. Are you going to be at the summit?
>
> Monty
>
>
>
> ------------------------------
>
> Message: 24
> Date: Thu, 28 Mar 2013 22:02:02 -0700
> From: Samuel Merritt <sam at swiftstack.com>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [keystone] naming case sensitive or not?
> Message-ID: <5155204A.8050703 at swiftstack.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 3/28/13 8:06 AM, Dolph Mathews wrote:
> > That's basically up to the identity driver in use -- for example, with
> > the SQL driver, if your database is case sensitive, then keystone will
> > be as well.
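Dolph's point, that the backing database decides case sensitivity, can be illustrated with a small self-contained sketch. This uses SQLite collations purely as an illustration; it is not keystone's actual identity-driver code:

```python
# Illustration only (not keystone code): whether a name lookup is case
# sensitive is decided by the database/collation backing the driver.
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite's default text collation is BINARY: '=' is case sensitive.
conn.execute("CREATE TABLE tenant (name TEXT)")
# With the NOCASE collation, the same comparison is case insensitive.
conn.execute("CREATE TABLE tenant_ci (name TEXT COLLATE NOCASE)")
for table in ("tenant", "tenant_ci"):
    conn.execute("INSERT INTO %s VALUES ('CamelCorp')" % table)

sensitive = conn.execute(
    "SELECT count(*) FROM tenant WHERE name = 'camelcorp'").fetchone()[0]
insensitive = conn.execute(
    "SELECT count(*) FROM tenant_ci WHERE name = 'camelcorp'").fetchone()[0]
print(sensitive, insensitive)  # 0 1
```

The same query against the same data gives different answers depending on collation, which is exactly why "is keystone case sensitive?" has no single answer.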
>
> That raises an interesting question about authorization with Keystone.
>
> In Swift, we have container ACLs that are of one of three* forms:
>
> (A) tenant_name:user_id
> (B) tenant_id:user_id
> (C) *:user_id
>
> Form A is the interesting one here. Let's say I have a container on
> which I have set a read ACL of "CamelCorp:12345". Then, a request comes
> in, and when Swift's keystoneauth middleware** gets called, it sees that
> the tenant name retrieved from Keystone is "Camelcorp" (different
> case!), and the user id is 12345 (a match).
>
> Should that request be allowed or not?
>
>
> * okay, there's the .r: stuff for referrer-based ACLs, but that's not
> germane to this discussion
>
> ** swift.common.middleware.keystoneauth.KeystoneAuth, for those who wish
> to read the code
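To make the question concrete, here is a hypothetical sketch of matching the A/B/C ACL forms described above. The helper name and the `case_sensitive_names` flag are illustrative assumptions, not Swift's actual keystoneauth middleware code:

```python
# Hypothetical sketch of the ACL question above; the helper and the
# case_sensitive_names flag are illustrative, not Swift's keystoneauth code.

def acl_matches(acl, tenant_name, tenant_id, user_id,
                case_sensitive_names=True):
    """Check one container ACL entry of the forms A/B/C described above."""
    ref, _, acl_user = acl.partition(':')
    if acl_user != user_id:      # user ids always compare exactly
        return False
    if ref == '*':               # form C: any tenant
        return True
    if ref == tenant_id:         # form B: tenant ids compare exactly
        return True
    # Form A (tenant_name): this is where case sensitivity matters.
    if case_sensitive_names:
        return ref == tenant_name
    return ref.lower() == tenant_name.lower()

# ACL "CamelCorp:12345" vs. tenant name "Camelcorp", user id "12345":
print(acl_matches("CamelCorp:12345", "Camelcorp", "tid", "12345"))  # False
print(acl_matches("CamelCorp:12345", "Camelcorp", "tid", "12345",
                  case_sensitive_names=False))                      # True
```

Whether the request should be allowed is exactly the choice of that flag, and the answer arguably has to follow whatever the identity driver does.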
>
>
>
> ------------------------------
>
> Message: 25
> Date: Fri, 29 Mar 2013 10:10:13 +0400
> From: Eugene Nikanorov <enikanorov at mirantis.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Quantum][LBaaS]- - LBaaS Extension in
>         Quantum Plugin
> Message-ID:
>         <CAJfiwOSOCFbgagQAwk6WC5BpxBheOACfrwt_-ur-Mo=
> GKZfYng at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi, Pattabi,
>
> Yes, it's better to wait for the summit prior to implementing device
> drivers.
> It seems that there will be two major competing approaches, which would
> probably require different versions of the lbaas plugin.
>
> Currently regular lbaas meetings are not held.
> I think it will change after the summit as the direction for the further
> service development will be set.
>
> *> Any detailed clarification on where we stand on supporting LBaaS in
> Quantum for Grizzly and what should the vendors do for the vendor specific
> drivers would greatly help in planning.*
> The current "reference implementation" is focused on HAProxy and does not
> directly support pluggable device drivers.
> In theory, a vendor could implement its device-specific agent the same way
> it's implemented for HAProxy, but I think it's better to wait until
> pluggable driver support is introduced. All these design questions will be
> discussed at the summit.
>
> Thanks,
> Eugene.
>
>
> ------------------------------
>
> Message: 26
> Date: Fri, 29 Mar 2013 12:30:09 +0530
> From: balaji patnala <patnala003 at gmail.com>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>, Ilya Shakhat
>         <ishakhat at mirantis.com>
> Subject: Re: [openstack-dev] [Doc][LBaaS] API doc for LBaaS extension
>         is ready for review
> Message-ID:
>         <
> CANT02KRnZzhqLaTZvPZKBdoWMVaYQLmxZbwituArdoEK8hKtuw at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi Ilya,
>
> Do we have any blueprint for this? I just want to understand the
> architecture we followed for this.
>
> As this feature has gone through multiple discussions and architecture
> changes, we should understand the basic architecture so that we can
> extend the same approach for both HW-based SLBs and VM-based SLBs.
>
> Regards,
> Balaji.P
>
> On Thu, Mar 28, 2013 at 5:43 PM, Ilya Shakhat <ishakhat at mirantis.com>
> wrote:
>
> > Hi,
> >
> > Please review a new section in API docs describing LBaaS extension.
> Review
> > is https://review.openstack.org/#/c/25409/
> > The text is partially based on
> > https://wiki.openstack.org/wiki/Quantum/LBaaS/API_1.0 . Requests and
> > responses are captured from traffic between python-client and quantum,
> > and thus may slightly differ from what is documented on the wiki.
> >
> > Thanks,
> > Ilya
> >
> >
> >
>
> ------------------------------
>
> Message: 27
> Date: Fri, 29 Mar 2013 13:03:00 +0400
> From: Ilya Shakhat <ishakhat at mirantis.com>
> To: balaji patnala <patnala003 at gmail.com>
> Cc: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Doc][LBaaS] API doc for LBaaS extension
>         is ready for review
> Message-ID:
>         <CAMzOD1JacgCijiaoUTNnzGz3cfiNZq9MAoG1=dH_X2=-
> 13jyRA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi Balaji,
>
> There are 3 blueprints directly related to LBaaS:
>  * https://blueprints.launchpad.net/quantum/+spec/lbaas-restapi-tenant -
> REST API as it is specified at
> https://wiki.openstack.org/wiki/Quantum/LBaaS/API_1.0
>  * https://blueprints.launchpad.net/quantum/+spec/lbaas-plugin-api-crud -
> db and plugin
>  * https://blueprints.launchpad.net/quantum/+spec/lbaas-namespace-agent -
> agent and driver for HAProxy
>
> The agent part is less documented, but it's designed similarly to the L3
> and DHCP agents. The agent polls the LB plugin via RPC and retrieves the
> full configuration. If there are changes (new objects in PENDING_CREATE
> state, or updated objects in PENDING_UPDATE), they are applied to HAProxy.
> Every pool/vip results in one haproxy process running on the same host as
> the agent. Each haproxy is executed in a separate IP namespace, so all
> load balancers are isolated from each other from the OS and network
> perspectives. There is exactly one haproxy per pool/vip.
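A rough sketch of the polling cycle described above; the class and method names here are illustrative stand-ins, not the actual quantum lbaas agent API:

```python
# Rough sketch of the lbaas agent polling cycle described above;
# names are illustrative, not the actual quantum agent API.

PENDING = ('PENDING_CREATE', 'PENDING_UPDATE')

class FakePluginRpc:
    """Stand-in for the RPC proxy to the lbaas plugin."""
    def get_logical_config(self):
        return [{'pool_id': 'pool-1', 'status': 'PENDING_CREATE'},
                {'pool_id': 'pool-2', 'status': 'ACTIVE'}]

def sync_once(plugin_rpc, deploy):
    """One cycle: fetch the full config and apply pending changes.

    Per the description above, each deployed pool/vip corresponds to
    exactly one haproxy process in its own IP namespace (roughly:
    ip netns exec <ns> haproxy -f <cfg>).
    """
    touched = []
    for lb in plugin_rpc.get_logical_config():
        if lb['status'] in PENDING:
            deploy(lb)           # (re)write haproxy.cfg and (re)spawn haproxy
            touched.append(lb['pool_id'])
    return touched

print(sync_once(FakePluginRpc(), deploy=lambda lb: None))  # ['pool-1']
```

Only objects in a pending state trigger a deploy; ACTIVE ones are left alone, which is what makes repeated polling cheap.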
>
> The roadmap for LB plugin is vast and will be discussed at the Summit.
> Current proposals are at
> https://etherpad.openstack.org/havana-quantum-lbaas
> .
>
>
> Thanks,
> Ilya
>
> 2013/3/29 balaji patnala <patnala003 at gmail.com>
>
> > Hi Ilya,
> >
> > Do we have any blue-print for this. Just want to understand the
> > architecture we followed for this.
> >
> > As this feature has got into multiple discussions and architecture
> changes.
> >
> > we should understand the basic architecture so that we can extend the
> same
> > for both HW based SLBs and VM based SLBs.
> >
> > Regards,
> > Balaji.P
> >
> > On Thu, Mar 28, 2013 at 5:43 PM, Ilya Shakhat <ishakhat at mirantis.com
> >wrote:
> >
> >> Hi,
> >>
> >> Please review a new section in API docs describing LBaaS extension.
> >> Review is https://review.openstack.org/#/c/25409/
> >> The text is partially based on
> >> https://wiki.openstack.org/wiki/Quantum/LBaaS/API_1.0 . Requests and
> >> responses are captured from traffic between python-client and quantum,
> >> and thus may differ slightly from what is documented on the wiki.
> >>
> >> Thanks,
> >> Ilya
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20130329/5c876e4a/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 28
> Date: Fri, 29 Mar 2013 10:34:24 +0100
> From: Chmouel Boudjnah <chmouel at chmouel.com>
> To: OpenStack Development Mailing List
>         <OpenStack-dev at lists.openstack.org>
> Subject: [openstack-dev] Fwd: [keystone] Keystone handling http
>         requests        synchronously
> Message-ID:
>         <
> CAPeWyqy5qVwCZAs1jLUxqAYpcCHOpwmekwyNXVCprTJSFUUttA at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> FYI
>
>
> ---------- Forwarded message ----------
> From: Adam Young <ayoung at redhat.com>
> Date: Thu, Mar 28, 2013 at 10:04 PM
> Subject: Re: [openstack-dev] [keystone] Keystone handling http
> requests synchronously
> To: openstack-dev at lists.openstack.org
>
>
> On 03/26/2013 01:34 PM, David Kranz wrote:
> >
> > This is without memcache in auth_token. I was trying to find a way past
> > https://bugs.launchpad.net/keystone/+bug/1020127
> > which I think I now have. I would appreciate it if you could validate
> > my comment at the end of that ticket. Here, I just thought that the
> > keystone throughput was very low. I know that swift should not be
> > hitting it so hard. If you were referring to using memcache in the
> > keystone server itself then
>
> You can use memcached as an alternate token back end, but I have no
> reason to think it would perform any better than SQL. It was broken
> until fairly recently, too, so I suspect it is not used much in the
> wild.
>
>
>
> > I didn't know you could do that.
> >
> >  -David
> >
> >
> >
> > On 3/26/2013 12:33 PM, Chmouel Boudjnah wrote:
> >>
> >> This seems pretty low; do you have memcached enabled?
> >>
> >> On Tue, Mar 26, 2013 at 4:20 PM, David Kranz <david.kranz at qrclab.com> wrote:
> >>>
> >>> Related to this, I measured that the rate at which keystone (running
> >>> on a real, fairly hefty server) can handle the requests coming from
> >>> the auth_token middleware (no PKI tokens) is about 16/s. That seems
> >>> pretty low to me. Is there some other keystone performance problem
> >>> here, or is that not surprising?
> >>>
> >>>   -David
> >>>
> >>>
> >>> On 3/24/2013 9:11 PM, Jay Pipes wrote:
> >>>>
> >>>> Sure, you could do that, of course. Just like you could use gunicorn
> >>>> or some other web server. Just like you could deploy any of the other
> >>>> OpenStack services that way.
> >>>>
> >>>> It would just be nice if one could configure Keystone in the same
> >>>> manner that all the other OpenStack services are configured.
> >>>>
> >>>> -jay
> >>>>
> >>>> On 03/23/2013 01:19 PM, Joshua Harlow wrote:
> >>>>>
> >>>>> See: https://github.com/openstack/keystone/tree/master/httpd
> >>>>>
> >>>>> For example...
> >>>>>
> >>>>> This lets apache do the multiprocess instead of how nova, glance ...
> >>>>> have basically recreated the same mechanism that apache has had for
> >>>>> years.
> >>>>>
> >>>>> Sent from my really tiny device...
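The httpd directory mentioned above refers to running keystone as a WSGI application under Apache. In rough sketch, such a deployment uses a mod_wsgi configuration along these lines; the paths, user, and worker counts below are illustrative assumptions, not the exact files shipped with keystone:

```apache
# Illustrative mod_wsgi snippet only; real paths and counts differ.
# mod_wsgi forks 'processes' OS processes, each with its own thread
# pool -- the multiprocess behavior referred to in this thread.
WSGIDaemonProcess keystone user=keystone group=keystone processes=4 threads=10
WSGIScriptAlias /keystone /var/www/cgi-bin/keystone/main
WSGIProcessGroup keystone
```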
> >>>>>
> >>>>> On Mar 23, 2013, at 10:14 AM, "Joshua Harlow"
> >>>>> <harlowja at yahoo-inc.com> wrote:
> >>>>>
> >>>>>> Or I think you can run keystone in wsgi+apache easily, thus getting
> >>>>>> you the multiprocess support via apache worker processes.
> >>>>>>
> >>>>>> Sent from my really tiny device...
> >>>>>>
> >>>>>> On Mar 22, 2013, at 10:47 AM, "Jay Pipes" <jaypipes at gmail.com> wrote:
> >>>>>>
> >>>>>>> Unfortunately, Keystone's WSGI server is only a single process, with
> >>>>>>> a greenthread pool. Unlike Glance, Nova, Cinder, and Swift, which
> >>>>>>> all use multi-process, greenthread-pool-per-process WSGI servers[1],
> >>>>>>> Keystone does it differently[2].
> >>>>>>>
> >>>>>>> There was a patchset[3] that added multiprocess support to Keystone,
> >>>>>>> but due to objections from termie and others about it not being
> >>>>>>> necessary, it died on the vine. Termie even noted that Keystone "was
> >>>>>>> designed to be run as multiple instances and load balanced over and
> >>>>>>> [he felt] that should be the preferred scaling point".
> >>>>>>>
> >>>>>>> Because the mysql client connection is C-based, calls to it will be
> >>>>>>> blocking operations on greenthreads within a single process, meaning
> >>>>>>> even if multiple greenthreads are spawned for those 200 incoming
> >>>>>>> requests, they will be processed synchronously.
> >>>>>>>
> >>>>>>> The solution is for Keystone to implement the same multi-processed
> >>>>>>> WSGI worker stuff that is in the other OpenStack projects. Or,
> >>>>>>> diverge from the deployment solution of Nova, Glance, Cinder, and
> >>>>>>> Swift, and manually run multiple instances of keystone, as Termie
> >>>>>>> suggests.
> >>>>>>>
> >>>>>>> Best,
> >>>>>>> -jay
> >>>>>>>
> >>>>>>> [1] All pretty much derived from the original Swift code, with some
> >>>>>>> Oslo improvements around config
> >>>>>>> [2] Compare
> >>>>>>> https://github.com/openstack/glance/blob/master/glance/common/wsgi.py
> >>>>>>> with
> >>>>>>> https://github.com/openstack/keystone/blob/master/keystone/common/wsgi.py
> >>>>>>> [3] https://review.openstack.org/#/c/7017/
> >>>>>>>
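The scheduling point made in this message can be modeled with a toy cooperative scheduler. Plain generators stand in for eventlet greenthreads here, so this is an illustration of the concept (a non-yielding call serializes all work), not Keystone code:

```python
# Toy cooperative scheduler. A "blocking" driver (like the C mysql
# client) never yields control mid-request, so requests complete one
# after another; a green-aware driver yields per call and interleaves.
from collections import deque

LOG = []

def request(name, green):
    """A 'request' performing two DB calls."""
    for op in range(2):
        LOG.append((name, op))   # the DB call itself
        if green:
            yield                # green-aware driver yields to the hub
    if not green:
        yield                    # blocking driver returns control only at the end

def run(tasks):
    """Round-robin 'hub': resume each greenthread until all finish."""
    queue = deque(tasks)
    while queue:
        gen = queue.popleft()
        try:
            next(gen)
            queue.append(gen)
        except StopIteration:
            pass

# Blocking driver: r1 finishes both calls before r2 starts.
run([request('r1', green=False), request('r2', green=False)])
blocking_log = list(LOG)   # [('r1', 0), ('r1', 1), ('r2', 0), ('r2', 1)]

# Green driver: calls from r1 and r2 interleave.
LOG.clear()
run([request('r1', green=True), request('r2', green=True)])
green_log = list(LOG)      # [('r1', 0), ('r2', 0), ('r1', 1), ('r2', 1)]
```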
> >>>>>>> On 03/21/2013 07:45 AM, Kanade, Rohan wrote:
> >>>>>>>>
> >>>>>>>> Hi,
> >>>>>>>>
> >>>>>>>> I was trying to create 200 users using the keystone client. All the
> >>>>>>>> users are unique and are created on separate threads which are
> >>>>>>>> started at the same time.
> >>>>>>>>
> >>>>>>>> Keystone is handling each request synchronously, i.e. user 1 is
> >>>>>>>> created, then user 2 is created ...
> >>>>>>>>
> >>>>>>>> Shouldn't keystone be running a greenthread for each request and
> >>>>>>>> try to create these users asynchronously? E.g. start creating
> >>>>>>>> user 1 and, while handling that request, start creating user 2 or
> >>>>>>>> user n...
> >>>>>>>>
> >>>>>>>> I have attached the keystone service logs for further assistance.
> >>>>>>>>
> >>>>>>>> http://paste.openstack.org/show/34216/
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>>
> >>>
> >>>
> >>>
> >>
> >
> >
> >
>
>
>
>
>
>
> ------------------------------
>
> Message: 29
> Date: Fri, 29 Mar 2013 11:51:23 +0100
> From: "Paul Sarin-Pollet" <psarpol at gmx.com>
> To: "OpenStack Development Mailing List"
>         <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev]  Supporting KMIP in Key Manager
> Message-ID: <20130329105123.93160 at gmx.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi
>
> Malini, I saw your design summit session about KMIP.
> Do you know of a good open-source implementation of a KMIP server?
> The main issue will be the validation process...
>
> Thanks
>
> Paul
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20130329/4014b838/attachment-0001.html
> >
>
> ------------------------------
>
>
>
> End of OpenStack-dev Digest, Vol 11, Issue 34
> *********************************************
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20130329/65718867/attachment-0001.html>

