[openstack-dev] [OSSG] Best tool for simple security gate checks

Clint Byrum clint at fewbar.com
Thu Jun 19 18:37:34 UTC 2014


A large majority of the failures I've seen OSSG report have been privilege
escalation in each service: trusts not scoping down properly, quotas
not being applied, or cross-project/tenant boundaries not being honored.

I don't think we've had many (if any) SQL or shell injection attacks or
buffer overflows or anything like that. We're all pretty well trained to
spot these issues, and Python makes you have to try pretty hard to
implement some of them.

I say this to suggest that perhaps the appropriate thing to put in the
gate is better coverage of cross-tenant scenarios and policy enforcement.
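
To make that concrete: the invariant those tests would pin down is that an
operation on a resource fails unless the caller's project owns it (or the
caller is admin). A minimal, self-contained sketch of the assertion (the
dict shapes here are invented, not a real Nova/Neutron API):

    def enforce_owner(context, resource):
        # Hypothetical policy check: admins pass, otherwise the caller's
        # project must own the resource.
        if context.get('is_admin'):
            return
        if context.get('project_id') != resource.get('project_id'):
            raise RuntimeError('policy: cross-project access denied')

    # A gate test would assert that the cross-project call is rejected:
    ctx_a = {'project_id': 'project-a', 'is_admin': False}
    volume_b = {'project_id': 'project-b'}
    try:
        enforce_owner(ctx_a, volume_b)
        raise AssertionError('cross-tenant access was allowed!')
    except RuntimeError:
        pass  # correctly denied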

Excerpts from Travis McPeak's message of 2014-06-19 11:21:24 -0700:
> Hi all,
> 
> In the OpenStack Security Group (OSSG) we've been kicking around the idea
> of getting some simple non-blocking security-related gate tests going.
> These tests would be designed to be simple and automated checks for
> low-hanging fruit such as the use of 'shell=True'.  The main goal is to
> have these be as noiseless as possible (a low rate of false positives).
> The hope is that if these are useful and unobtrusive enough, when they
> actually do fail, people will take note.
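
A check like that fits in a couple dozen lines of the stdlib ast module.
A minimal sketch of the idea (illustration only, not an OSSG deliverable;
the output format is invented):

    import ast
    import sys

    def _is_true(node):
        # True parses as ast.Name('True') on py2 and as a constant on py3.
        return getattr(node, 'id', None) == 'True' or \
            getattr(node, 'value', None) is True

    def check_shell_true(path):
        tree = ast.parse(open(path).read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, ast.Call):
                for kw in node.keywords:
                    if kw.arg == 'shell' and _is_true(kw.value):
                        print('%s:%d: call uses shell=True'
                              % (path, node.lineno))

    if __name__ == '__main__':
        for path in sys.argv[1:]:
            check_shell_true(path)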
> 
> We will start off small, with maybe one simple gate test, and expand later
> if it proves to be useful.  We plan to test heavily internally, and then
> start requesting integration into projects later.
> 
> My question is: what is the best tool for the job?  I have heard Pylint
> and Hacking mentioned.  Are there any others?
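
For reference, a Hacking check is just a flake8-style function that takes
the logical line and yields (offset, message) tuples, so the shell=True
check above could also be packaged like this (the check name and error
code are invented):

    def hacking_no_shell_true(logical_line):
        """S100 - do not pass shell=True to subprocess calls.

        A sketch only; a real check would be registered with the
        framework and honor noqa.
        """
        pos = logical_line.find('shell=True')
        if pos >= 0:
            yield (pos, 'S100: shell=True is a shell-injection risk')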
> 
> Thanks,
>   -Travis
> 
> 
> 
> 
> On 6/19/14, 5:00 AM, "openstack-dev-request at lists.openstack.org"
> <openstack-dev-request at lists.openstack.org> wrote:
> 
> >Send OpenStack-dev mailing list submissions to
> >       openstack-dev at lists.openstack.org
> >
> >To subscribe or unsubscribe via the World Wide Web, visit
> >       http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >or, via email, send a message with subject or body 'help' to
> >       openstack-dev-request at lists.openstack.org
> >
> >You can reach the person managing the list at
> >       openstack-dev-owner at lists.openstack.org
> >
> >When replying, please edit your Subject line so it is more specific
> >than "Re: Contents of OpenStack-dev digest..."
> >
> >
> >Today's Topics:
> >
> >   1. [nova][libvirt] Block migrations and Cinder volumes
> >      (Rafi Khardalian)
> >   2. Re: [neutron]Performance of security group (henry hly)
> >   3. Re: [Neutron][ML2] Modular L2 agent architecture (henry hly)
> >   4. Re: [heat] agenda for OpenStack Heat meeting 2014-06-18 20:00
> >      UTC - corrections to meeting minutes (Mike Spreitzer)
> >   5. Re: [Neutron][LBaaS] TLS support RST document on Gerrit
> >      (Samuel Bercovici)
> >   6. [DevStack] fails in lxc running ubuntu trusty amd64
> >      (Mike Spreitzer)
> >   7. Re: [neutron]Performance of security group (Édouard Thuleau)
> >   8. Re: [nova] A modest proposal to reduce reviewer load
> >      (Mark McLoughlin)
> >   9. Re: [DevStack] fails in lxc running ubuntu trusty amd64
> >      (Jérôme Gallard)
> >  10. Re: [Horizon] Quick Survey: Horizon Mid-Cycle Meetup
> >      (Matthias Runge)
> >  11. Re: [nova] A modest proposal to reduce reviewer load
> >      (Matthew Booth)
> >  12. Re: [nova][libvirt] Block migrations and Cinder volumes
> >      (Daniel P. Berrange)
> >  13. Re: [TripleO] Backwards compatibility policy for our projects
> >      (Giulio Fidente)
> >  14. Re: [nova][libvirt] Block migrations and Cinder volumes
> >      (Ronen Kat)
> >  15. Re: [Nova] Nominating Ken'ichi Ohmichi for nova-core
> >      (AMIT PRAKASH PANDEY)
> >  16. [nova] Nova API meeting (Kenichi Oomichi)
> >  17. Re: [Neutron][ML2] Modular L2 agent architecture (Zang MingJie)
> >  18. Re: [qa] Clarification of policy for qa-specs around adding
> >      new tests (Kenichi Oomichi)
> >  19. Re: [DevStack] fails in lxc running ubuntu trusty amd64
> >      (Sean Dague)
> >  20. Re: [FUEL] Zabbix in MOS meeting notes (Swann Croiset)
> >  21. Re: [Neutron][L3] Team Meeting Thursday at 1500 UTC
> >      (Paul Michali (pcm))
> >  22. Re: [TripleO] Backwards compatibility policy for our projects
> >      (Duncan Thomas)
> >  23. Re: [FUEL] Zabbix in MOS meeting notes (Alexander Kislitsky)
> >  24. Help in writing new neutron plugin (shiva m)
> >  25. Re: [nova][libvirt] Block migrations and Cinder volumes
> >      (Duncan Thomas)
> >  26. [oslo] oslo.utils and oslo.local (Davanum Srinivas)
> >
> >
> >---------------------------------------------------------------------
> >
> >Message: 1
> >Date: Wed, 18 Jun 2014 23:09:33 -0700
> >From: Rafi Khardalian <rafi at metacloud.com>
> >To: OpenStack Development Mailing List
> >       <openstack-dev at lists.openstack.org>
> >Subject: [openstack-dev] [nova][libvirt] Block migrations and Cinder
> >       volumes
> >Message-ID:
> >       <CAH9GqTzjxh1Zktj-0HDHgcWhKoOOmGQsYnrx_gdLDpeTRmAng at mail.gmail.com>
> >Content-Type: text/plain; charset="utf-8"
> >
> >I am concerned about how block migration functions when Cinder volumes
> >are attached to an instance being migrated.  We noticed some unexpected
> >behavior recently, whereby attached generic NFS-based volumes would become
> >entirely unsparse over the course of a migration.  After spending some
> >time reviewing the code paths in Nova, I'm more concerned that this was
> >actually a minor symptom of a much more significant issue.
> >
> >For those unfamiliar, NFS-based volumes are simply RAW files residing on
> >an NFS mount.  From Libvirt's perspective, these volumes look no different
> >than root or ephemeral disks.  We are currently not filtering out volumes
> >whatsoever when making the request into Libvirt to perform the migration.
> > Libvirt simply receives an additional flag (VIR_MIGRATE_NON_SHARED_INC)
> >when a block migration is requested, which applied to the entire migration
> >process, not differentiated on a per-disk basis.  Numerous guards within
> >Nova prevent a block based migration from being allowed if the instance
> >disks exist on the destination; yet volumes remain attached and within the
> >defined XML during a block migration.
> >
> >Unless Libvirt has a lot more logic around this than I am lead to believe,
> >this seems like a recipe for corruption.  It seems as though this would
> >also impact any type of volume attached to an instance (iSCSI, RBD, etc.),
> >NFS just happens to be what we were testing.  If I am wrong and someone
> >can correct my understanding, I would really appreciate it.  Otherwise,
> >I'm surprised we haven't had more reports of issues when block migrations
> >are used in conjunction with any attached volumes.
> >
> >I have ideas on how we can address the issue if we can reach some
> >consensus that the issue is valid, but we'll discuss those if/when we get
> >to that point.
> >
> >Regards,
> >Rafi
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140618/4
> >f4b7709/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 2
> >Date: Thu, 19 Jun 2014 14:25:34 +0800
> >From: henry hly <henry4hly at gmail.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [neutron]Performance of security group
> >Message-ID:
> >       <CAFHmYs_UGkTj2YifXs_pe85XRa3JAf9GraE0cdw-cVY5_M3uBg at mail.gmail.com>
> >Content-Type: text/plain; charset="utf-8"
> >
> >We have done some tests but got a different result: performance is nearly
> >the same with empty vs. 5k rules in iptables, but there is a huge gap
> >between enabling and disabling the iptables hook on the Linux bridge.
> >
> >
> >On Thu, Jun 19, 2014 at 11:21 AM, shihanzhan <ayshihanzhang at 126.com>
> >wrote:
> >
> >> I do not have accurate test data yet, but I can confirm the following
> >> points:
> >> 1. On a compute node, a VM's iptables chain is linear, and iptables
> >> filters it rule by rule; if a VM is in the default security group and
> >> that group has many members, the chain grows with membership. With an
> >> ipset chain, the filtering time for one member versus many members
> >> differs very little.
> >> 2. When the iptables rule set is very large, iptables-save is very
> >> likely to fail to save the rules.
> >>
> >>
> >>
> >>
> >>
> >> At 2014-06-19 10:55:56, "Kevin Benton" <blak111 at gmail.com> wrote:
> >>
> >> This sounds like a good idea to handle some of the performance issues
> >> until the OVS firewall can be implemented down the line.
> >> Do you have any performance comparisons?
> >> On Jun 18, 2014 7:46 PM, "shihanzhang" <ayshihanzhang at 126.com> wrote:
> >>
> >>> Hello all,
> >>>
> >>> Neutron currently uses iptables to implement security groups, but the
> >>> performance of this implementation is very poor; there is a bug,
> >>> https://bugs.launchpad.net/neutron/+bug/1302272, reflecting this
> >>> problem.
> >>> In that test, with default security groups (which have a remote
> >>> security group), beyond 250-300 VMs there were around 6k iptables rules
> >>> on every compute node. Although that patch can reduce the processing
> >>> time, it doesn't solve the problem fundamentally. I have submitted a
> >>> blueprint to solve this problem:
> >>> https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
> >>> Are other people interested in this?
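
The gain comes from replacing one iptables rule per group member with a
single set-match rule, so membership updates touch only the set. A
schematic sketch (not the blueprint's implementation):

    import subprocess

    def sync_members(set_name, members):
        # Idempotently create the set and add the members; '-exist'
        # suppresses errors for sets/entries that are already present.
        subprocess.check_call(['ipset', '-exist', 'create',
                               set_name, 'hash:ip'])
        for ip in members:
            subprocess.check_call(['ipset', '-exist', 'add', set_name, ip])

    # The VM's chain then needs only one rule, regardless of group size:
    #   iptables -A <sg-chain> -m set --match-set <set_name> src -j RETURN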
> >>>
> >>>
> >>>
> >>> _______________________________________________
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev at lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>>
> >>
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/0
> >113cebf/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 3
> >Date: Thu, 19 Jun 2014 14:44:47 +0800
> >From: henry hly <henry4hly at gmail.com>
> >To: "OpenStack Develoment Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent
> >       architecture
> >Message-ID:
> >       <CAFHmYs8ekniMLOPLXznNn-S7EiJrvyuh3Va8fJqfe7+xK6cRPA at mail.gmail.com>
> >Content-Type: text/plain; charset="utf-8"
> >
> >The OVS agent manipulates not only the OVS flow table but also the Linux
> >network stack, which is not so easily replaced by a pure OpenFlow
> >controller today.
> >A fastpath/slowpath separation sounds good, but it is really a nightmare
> >for applications with highly concurrent connections if we push L4 flows
> >into OVS (in our testing, the vswitchd daemon always stops working in
> >this case).
> >
> >Someday, when OVS can handle all the L2-L4 rules in the kernel without
> >bothering the userspace classifier, a pure OpenFlow controller can replace
> >the agent-based solution. OVS hooking into netfilter conntrack may come
> >this year, but that is not enough yet.
> >
> >
> >On Wed, Jun 18, 2014 at 12:56 AM, Armando M. <armamig at gmail.com> wrote:
> >
> >> just a provocative thought: If we used the ovsdb connection instead, do
> >>we
> >> really need an L2 agent :P?
> >>
> >>
> >> On 17 June 2014 18:38, Kyle Mestery <mestery at noironetworks.com> wrote:
> >>
> >>> Another area of improvement for the agent would be to move away from
> >>> executing CLIs for port commands and instead use OVSDB. Terry Wilson
> >>> and I talked about this, and re-writing ovs_lib to use an OVSDB
> >>> connection instead of the CLI methods would be a huge improvement
> >>> here. I'm not sure if Terry was going to move forward with this, but
> >>> I'd be in favor of this for Juno if he or someone else wants to move
> >>> in this direction.
> >>>
> >>> Thanks,
> >>> Kyle
> >>>
> >>> On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando
> >>><sorlando at nicira.com>
> >>> wrote:
> >>> > We've started doing this in a slightly more reasonable way for
> >>>icehouse.
> >>> > What we've done is:
> >>> > - remove unnecessary notification from the server
> >>> > - process all port-related events, either triggered via RPC or via
> >>> monitor in
> >>> > one place
> >>> >
> >>> > Obviously there is always a lot of room for improvement, and I agree
> >>> > something along the lines of what Zang suggests would be more
> >>> maintainable
> >>> > and ensure faster event processing as well as making it easier to
> >>>have
> >>> some
> >>> > form of reliability on event processing.
> >>> >
> >>> > I was considering doing something for the ovs-agent again in Juno,
> >>>but
> >>> since
> >>> > we're moving towards a unified agent, I think any new "big" ticket
> >>> should
> >>> > address this effort.
> >>> >
> >>> > Salvatore
> >>> >
> >>> >
> >>> > On 17 June 2014 13:31, Zang MingJie <zealot0630 at gmail.com> wrote:
> >>> >>
> >>> >> Hi:
> >>> >>
> >>> >> Awesome! Currently we are suffering from lots of bugs in the
> >>> >> ovs-agent, and we also intend to rebuild a more stable, flexible
> >>> >> agent.
> >>> >>
> >>> >> Taking the experience of the ovs-agent bugs, I think concurrency
> >>> >> is also a very important problem: the agent gets lots of events
> >>> >> from different greenlets, the RPC, the OVS monitor, or the main
> >>> >> loop. I'd suggest serializing all events to a queue, then processing
> >>> >> events in a dedicated thread. The thread checks the events one by
> >>> >> one in order, resolves what has changed, then applies the
> >>> >> corresponding changes. If any error occurs in the thread, discard
> >>> >> the event being processed and do a fresh-start event, which resets
> >>> >> everything and then applies the correct settings.
> >>> >>
> >>> >> The threading model is so important and may prevent tons of bugs in
> >>> >> future development that we should describe it clearly in the
> >>> >> architecture.
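
The shape of that model is small enough to sketch (stubs only, not agent
code; the handlers are invented):

    import queue
    import threading

    events = queue.Queue()

    def apply_change(event):
        print('applying %r' % (event,))   # stub for the real handler

    def resync_everything():
        print('full resync')              # stub: reset and reapply state

    def worker():
        while True:
            event = events.get()
            try:
                apply_change(event)
            except Exception:
                # On any error, discard the event and start fresh.
                resync_everything()
            finally:
                events.task_done()

    threading.Thread(target=worker, daemon=True).start()
    # RPC handlers, the OVS monitor and the main loop all just call:
    #     events.put(event)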
> >>> >>
> >>> >>
> >>> >> On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi <mb at us.ibm.com>
> >>> >> wrote:
> >>> >> > Following the discussions in the ML2 subgroup weekly meetings, I
> >>>have
> >>> >> > added
> >>> >> > more information on the etherpad [1] describing the proposed
> >>> >> > architecture
> >>> >> > for modular L2 agents. I have also posted some code fragments at
> >>>[2]
> >>> >> > sketching the implementation of the proposed architecture. Please
> >>> have a
> >>> >> > look when you get a chance and let us know if you have any
> >>>comments.
> >>> >> >
> >>> >> > [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
> >>> >> > [2] https://review.openstack.org/#/c/99187/
> >>> >> >
> >>> >> >
> >>> >> > _______________________________________________
> >>> >> > OpenStack-dev mailing list
> >>> >> > OpenStack-dev at lists.openstack.org
> >>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >> >
> >>> >>
> >>> >> _______________________________________________
> >>> >> OpenStack-dev mailing list
> >>> >> OpenStack-dev at lists.openstack.org
> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >
> >>> >
> >>> >
> >>> > _______________________________________________
> >>> > OpenStack-dev mailing list
> >>> > OpenStack-dev at lists.openstack.org
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >
> >>>
> >>> _______________________________________________
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev at lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/2
> >696588d/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 4
> >Date: Thu, 19 Jun 2014 02:55:39 -0400
> >From: Mike Spreitzer <mspreitz at us.ibm.com>
> >To: "OpenStack Development Mailing List \(not for usage questions\)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [heat] agenda for OpenStack Heat meeting
> >       2014-06-18 20:00 UTC - corrections to meeting minutes
> >Message-ID:
> >       <OF74E36329.3927CB5F-ON85257CFC.0025709E-85257CFC.00260DCC at us.ibm.com>
> >Content-Type: text/plain; charset="us-ascii"
> >
> >Mike Spreitzer/Watson/IBM at IBMUS wrote on 06/18/2014 05:00:57 PM:
> >...
> >>
> >http://eavesdrop.openstack.org/meetings/heat/2014/heat.2014-06-18-20.00.ht
> >ml
> >
> >
> >I found two goofups so far.  One is that the following was not recorded
> >in
> >the official outline (#agreed only really works for chairs):
> >
> >20:10:12 <zaneb> #agreed heat-slow job will remain non-voting until
> >current issues are fixed
> >
> >The other concerns the dates for the mid-cycle meet-up in August.  The
> >agreement was on Monday--Wednesday, which are the 18th through the 20th.
> >I
> >incorrectly recorded that as
> >
> >20:35:47 <mspreitz> #agreed have the second meetup, Aug 19--21
> >
> >when in fact the correct statement is
> >
> >20:35:47 <mspreitz> #agreed have the second meetup, Aug 18--20
> >
> >Regards,
> >Mike
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/6
> >ce90077/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 5
> >Date: Thu, 19 Jun 2014 07:03:00 +0000
> >From: Samuel Bercovici <SamuelB at Radware.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document
> >       on      Gerrit
> >Message-ID:
> >       <F36E8145F2571242A675F56CF60964560F0BAB8B at ILMB1.corp.radware.com>
> >Content-Type: text/plain; charset="utf-8"
> >
> >Hi Stephen,
> >
> >The difference is whether the user gets to specify hostnames or whether
> >they are derived from the certificate.
> >I agree that it might be friendlier to have an immutable hostname
> >field that gets cached in LBaaS but is read from the certificate and
> >not managed by the end user.
> >
> >-Sam.
> >
> >
> >
> >From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> >Sent: Thursday, June 19, 2014 1:44 AM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on
> >Gerrit
> >
> >So... what I'm hearing here is that we might want to support both a
> >'hostname' and an 'order' attribute. Though exact behavior from vendor to
> >vendor when there is name overlap is not likely to be consistent.
> >
> >Note that while we have seen that corner case, it is unusual... so I'm
> >not against having slightly different behavior when there's name overlap
> >from vendor to vendor.
> >
> >Stephen
> >
> >On Wed, Jun 18, 2014 at 2:15 PM, Samuel Bercovici
> ><SamuelB at radware.com> wrote:
> >Hi Stephen,
> >
> >Radware Alteon extracts the hostname information and the subjectAltName
> >from the certificate information.
> >It then does the following:
> >
> >1.      Find an exact match between the name in the HTTPS handshake
> >and the ones extracted from the certificate; if there is more than a
> >single match, the 1st one in the order will be used
> >
> >2.      If no match was found, then try to use the regexp hostname to
> >match; if you have multiple matches, the 1st one will be used
> >
> >3.      If no match was found, then try to use subjectAltName to match;
> >if you have multiple matches, the 1st one will be used
> >
> >4.      If no match, then use the default certificate
> >
> >-Sam.
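
Restated as code, that cascade is roughly the following (one reading of
it, not Alteon's implementation; the Cert shape is invented):

    import re
    from collections import namedtuple

    Cert = namedtuple('Cert', 'hostnames patterns alt_names')

    def select_cert(sni_name, certs, default_cert):
        for cert in certs:                   # 1. exact hostname match
            if sni_name in cert.hostnames:
                return cert
        for cert in certs:                   # 2. regexp hostname match
            if any(re.match(p + '$', sni_name) for p in cert.patterns):
                return cert
        for cert in certs:                   # 3. subjectAltName match
            if sni_name in cert.alt_names:
                return cert
        return default_cert                  # 4. default certificate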
> >
> >
> >
> >
> >From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> >Sent: Thursday, June 19, 2014 12:03 AM
> >
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on
> >Gerrit
> >
> >Hi Evg,
> >
> >I do not think stunnel supports an "ordered list" without hostnames.
> >Since we're talking about making the reference implementation use stunnel
> >for TLS termination, then this seems like it's important to support its
> >behavioral model.
> >
> >It is possible to extract hostnames from the CN and x509v3 Subject
> >Alternative Names in the certs, but, as has been discussed previously,
> >these can overlap, and it's not always reliable to rely on this data from
> >the certs themselves. So, while I have nothing against having an ordered
> >certificate list, stunnel won't use the order, and stunnel will likely
> >have unexpected behavior if hostnames are duplicated.
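
For what it's worth, extracting those names is short with the cryptography
package's x509 API (a sketch; pem_bytes is assumed to hold the PEM-encoded
certificate):

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.backends import default_backend

    def names_from_cert(pem_bytes):
        cert = x509.load_pem_x509_certificate(pem_bytes, default_backend())
        # CN first, then any DNS entries from subjectAltName.
        names = [attr.value for attr in
                 cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
        try:
            san = cert.extensions.get_extension_for_class(
                x509.SubjectAlternativeName)
            names.extend(san.value.get_values_for_type(x509.DNSName))
        except x509.ExtensionNotFound:
            pass
        return names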
> >
> >Would it work for Radware to simply order the (unique) hostnames
> >alphabetically, and put any wildcard certificates at the end of the list?
> >
> >Also, while I'm loathe to ask for details on a proprietary system: How
> >does Radware do SNI *without* hostnames? Isn't that entirely the point of
> >SNI? Client sends a hostname, and server responds with the certificate
> >that applies to that hostname?
> >
> >Thanks,
> >Stephen
> >
> >On Wed, Jun 18, 2014 at 8:00 AM, Evgeny Fedoruk
> ><EvgenyF at radware.com> wrote:
> >Hi Stephen,
> >Regarding your comment related to SNI list management and behavior in the
> >RST document:
> >
> >I understand the need to explicitly specify specific certificates for
> >specific hostnames.
> >However, we need to deliver the lowest common denominator for this
> >feature, one which every vendor is able to support.
> >In this case, specifying a hostname for a certificate will not be
> >supported by Radware.
> >The original proposal with ordered certificates list may be the lowest
> >common denominator for all vendors and we should find out if this is the
> >case.
> >If not, managing a simple non-ordered list will probably be the lowest
> >common denominator.
> >
> >With the proposed flavors framework considered, extra SNI management
> >capabilities may be exposed for providers, but meanwhile we should agree
> >on a proposal that can be implemented by all vendors.
> >What are your thoughts on this?
> >
> >Regarding the SNIPolicy, I agree and will change the document accordingly.
> >
> >Thanks,
> >Evg
> >
> >
> >
> >
> >
> >-----Original Message-----
> >From: Evgeny Fedoruk
> >Sent: Sunday, June 15, 2014 1:55 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on
> >Gerrit
> >
> >Hi All,
> >
> >The document was updated and is ready for the next review round.
> >Main things that were changed:
> >1. Comments were addressed
> >2. No back-end re-encryption supported
> >3. Intermediate certificates chain supported
> >        *Open question: Should the chain be stored in the same TLS
> >container as the certificate?
> >
> >Please review
> >Regards,
> >Evgeny
> >
> >
> >-----Original Message-----
> >From: Douglas Mendizabal
> >[mailto:douglas.mendizabal at RACKSPACE.COM<mailto:douglas.mendizabal at RACKSPA
> >CE.COM>]
> >Sent: Wednesday, June 11, 2014 10:22 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on
> >Gerrit
> >
> >Hi Doug,
> >
> >
> >Barbican does guarantee the integrity and availability of the secret,
> >unless the owner of the secret deletes it from Barbican.  We're not
> >encouraging that you store a shadow-copy of the secret either.  This was
> >proposed by the LBaaS team as a possible workaround for your use case.
> >Our recommendation was that there are two options for dealing with
> >Secrets being deleted from under you:
> >
> >If you want to control the lifecycle of the secret so that you can
> >prevent the user from deleting the secret, then the secret should be
> >owned by LBaaS, not by the user.  You can achieve this by asking the user
> >to upload the secret via LBaaS api, and then use Barbican on the back end
> >to store the secret under the LBaaS tenant.
> >
> >If you want the user to own and manage their secret in Barbican, then you
> >have to deal with the situation where the user deletes a secret and it is
> >no longer available to LBaaS.  This is a situation you would have to deal
> >with even with a reference-counting and force-deleting Barbican, so I
> >don't think you really gain anything from all the complexity you're
> >proposing to add to Barbican.
> >
> >-Douglas M.
> >
> >
> >
> >On 6/11/14, 12:57 PM, "Doug Wiegley"
> ><dougw at a10networks.com> wrote:
> >
> >>There are other fundamental things about secrets, like relying on their
> >>presence, and not encouraging a proliferation of a dozen
> >>mini-secret-stores everywhere to get around that fact, which makes it
> >>less secret.  Have you considered a 'force' delete flag, required if
> >>some service is using the secret, sort of 'rm' vs 'rm -f', to avoid the
> >>obvious foot-shooting use cases, but still allowing the user to nuke it
> >>if necessary?
> >>
> >>Thanks,
> >>Doug
> >>
> >>
> >>On 6/11/14, 11:43 AM, "Clark, Robert Graham"
> >><robert.clark at hp.com> wrote:
> >>
> >>>Users have to be able to delete their secrets from Barbican, it's a
> >>>fundamental key-management requirement.
> >>>
> >>>> -----Original Message-----
> >>>> From: Eichberger, German
> >>>> Sent: 11 June 2014 17:43
> >>>> To: OpenStack Development Mailing List (not for usage questions)
> >>>> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST
> >>>> document on Gerrit
> >>>>
> >>>> Sorry, I am late to the party. Holding the shadow copy in the
> >>>> backend
> >>>is a
> >>>> fine solution.
> >>>>
> >>>> Also, if containers are immutable can they be deleted at all? Can we
> >>>make a
> >>>> requirement that a user can't delete a container in Barbican?
> >>>>
> >>>> German
> >>>>
> >>>> -----Original Message-----
> >>>> From: Eichberger, German
> >>>> Sent: Wednesday, June 11, 2014 9:32 AM
> >>>> To: OpenStack Development Mailing List (not for usage questions)
> >>>> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST
> >>>> document on Gerrit
> >>>>
> >>>> Hi,
> >>>>
> >>>> I think the previous solution is easier for a user to understand.
> >>>> The referenced container got tampered/deleted we throw an error -
> >>>> but keep existing load balancers intact.
> >>>>
> >>>> With the shadow container we get additional complexity and the user
> >>>might
> >>>> be confused where the values are coming from.
> >>>>
> >>>> German
> >>>>
> >>>> -----Original Message-----
> >>>> From: Carlos Garza
> >>>>[mailto:carlos.garza at rackspace.com]
> >>>> Sent: Tuesday, June 10, 2014 12:18 PM
> >>>> To: OpenStack Development Mailing List (not for usage questions)
> >>>> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST
> >>>> document on Gerrit
> >>>>
> >>>> See Adam's message re: Re: [openstack-dev] [Neutron][LBaaS] Barbican
> >>>> Neutron LBaaS Integration Ideas.
> >>>> He's advocating keeping a shadow copy of the private key, owned by
> >>>> the LBaaS service, so that in case a key is tampered with during an
> >>>> LB update, migration, etc., we can still check against the shadow
> >>>> backup and compare it to the user-owned TLS container; in case the
> >>>> original is not there, the backup can be used.
> >>>>
> >>>> On Jun 10, 2014, at 12:47 PM, Samuel Bercovici
> >>>><SamuelB at Radware.com>
> >>>>  wrote:
> >>>>
> >>>> > To elaborate on the case where containers get deleted while LBaaS
> >>>still
> >>>> references it.
> >>>> > We think that the following approach will do:
> >>>> > *         The end user can delete a container and leave a "dangling"
> >>>> reference in LBaaS.
> >>>> > *         It would be nice to allow adding meta data on the
> >>>container so that
> >>>> the user will be aware which listeners use this container. This is
> >>>optional. It
> >>>> can also be optional for LBaaS to implement adding the listeners ID
> >>>> automatically into this metadata just for information.
> >>>> > *         In LBaaS, if an update happens which requires pulling the
> >>>> container from Barbican and the ID references a non-existing
> >>>> container, the update will fail and will indicate that the referenced
> >>>> certificate does not exist any more. This validation could be
> >>>> implemented on the LBaaS API itself as well as by the driver who will
> >>>> actually need the container.
> >>>> >
> >>>> > Regards,
> >>>> >                 -Sam.
> >>>> >
> >>>> >
> >>>> > From: Evgeny Fedoruk
> >>>> > Sent: Tuesday, June 10, 2014 2:13 PM
> >>>> > To: OpenStack Development Mailing List (not for usage questions)
> >>>> > Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST
> >>>document
> >>>> > on Gerrit
> >>>> >
> >>>> > Hi All,
> >>>> >
> >>>> > Carlos, Vivek, German, thanks for reviewing the RST doc.
> >>>> > There are some issues I want to pinpoint final decision on them
> >>>here, in
> >>>> ML, before writing it down in the doc.
> >>>> > Other issues will be commented on the document itself.
> >>>> >
> >>>> > 1.       Support/No support in JUNO
> >>>> > Referring to summit's etherpad
> >>>> https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7,
> >>>> > a.       SNI certificates list was decided to be supported. Was a
> >>>> decision made not to support it?
> >>>> > Single certificate with multiple domains can only partly address
> >>>> > the need for SNI, still, different applications on back-end will
> >>>> > need
> >>>different
> >>>> certificates.
> >>>> > b.      Back-end re-encryption was decided to be supported. Was a
> >>>> decision made not to support it?
> >>>> > c.       With front-end client authentication and back-end server
> >>>> authentication not supported,
> >>>> > Should certificate chains be supported?
> >>>> > 2.       Barbican TLS containers
> >>>> > a.       TLS containers are immutable.
> >>>> > b.      TLS container is allowed to be deleted, always.
> >>>> >          i.  Even when it is used by an LBaaS VIP listener (or
> >>>> other service).
> >>>> >         ii.  Metadata on the TLS container will help the tenant
> >>>> understand that the container is in use by the LBaaS service/VIP
> >>>> listener.
> >>>> >        iii.  If every VIP listener "registers" itself in metadata
> >>>> while retrieving the container, how will that "registration" be
> >>>> removed when the VIP listener stops using the certificate?
> >>>> >
> >>>> > Please comment on these points and review the document on gerrit
> >>>> > (https://review.openstack.org/#/c/98640)
> >>>> > I will update the document with decisions on above topics.
> >>>> >
> >>>> > Thank you!
> >>>> > Evgeny
> >>>> >
> >>>> >
> >>>> > From: Evgeny Fedoruk
> >>>> > Sent: Monday, June 09, 2014 2:54 PM
> >>>> > To: OpenStack Development Mailing List (not for usage questions)
> >>>> > Subject: [openstack-dev] [Neutron][LBaaS] TLS support RST document
> >>>on
> >>>> > Gerrit
> >>>> >
> >>>> > Hi All,
> >>>> >
> >>>> > A Spec RST document for LBaaS TLS support was added to Gerrit
> >>>> > for review
> >>>> > https://review.openstack.org/#/c/98640
> >>>> >
> >>>> > You are welcome to start commenting it for any open discussions.
> >>>> > I tried to address each aspect being discussed, please add
> >>>> > comments
> >>>> about missing things.
> >>>> >
> >>>> > Thanks,
> >>>> > Evgeny
> >>>> >
> >>>> > _______________________________________________
> >>>> > OpenStack-dev mailing list
> >>>> >
> >>>>OpenStack-dev at lists.openstack.org
> >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> OpenStack-dev mailing list
> >>>>
> >>>>OpenStack-dev at lists.openstack.org
> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>>
> >>>> _______________________________________________
> >>>> OpenStack-dev mailing list
> >>>>
> >>>>OpenStack-dev at lists.openstack.org
> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>>
> >>>> _______________________________________________
> >>>> OpenStack-dev mailing list
> >>>>
> >>>>OpenStack-dev at lists.openstack.org
> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>_______________________________________________
> >>OpenStack-dev mailing list
> >>OpenStack-dev at lists.openstack.org
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >_______________________________________________
> >OpenStack-dev mailing list
> >OpenStack-dev at lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >_______________________________________________
> >OpenStack-dev mailing list
> >OpenStack-dev at lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >--
> >Stephen Balukoff
> >Blue Box Group, LLC
> >(800)613-4305 x807
> >
> >_______________________________________________
> >OpenStack-dev mailing list
> >OpenStack-dev at lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >--
> >Stephen Balukoff
> >Blue Box Group, LLC
> >(800)613-4305 x807
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/d
> >ed36643/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 6
> >Date: Thu, 19 Jun 2014 03:03:19 -0400
> >From: Mike Spreitzer <mspreitz at us.ibm.com>
> >To: "OpenStack Development Mailing List"
> >       <openstack-dev at lists.openstack.org>
> >Subject: [openstack-dev] [DevStack] fails in lxc running ubuntu trusty
> >       amd64
> >Message-ID:
> >       <OFA53642E6.436B1766-ON85257CFC.00262582-85257CFC.0026C147 at us.ibm.com>
> >Content-Type: text/plain; charset="us-ascii"
> >
> >In my linux containers running Ubuntu 14.04 64-bit, DevStack fails
> >because
> >it can not install the package named tgt.  The problem is that the
> >install
> >script invokes the tgt service's start operation, which launches the
> >daemon (tgtd), and the launch fails with troubles with RDMA.  Has anybody
> >tried such a thing?  Any fixes, workarounds?  Any ideas?
> >
> >Thanks,
> >Mike
> >
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/c
> >ffb3b09/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 7
> >Date: Thu, 19 Jun 2014 09:11:12 +0200
> >From: Édouard Thuleau <thuleau at gmail.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [neutron]Performance of security group
> >Message-ID:
> >       <CAKRF=eciO6YVdGmkZgOPZuC9C+j_a_dL9EVSuyg0NmKfDaf+jA at mail.gmail.com>
> >Content-Type: text/plain; charset="utf-8"
> >
> >Have you thought about nftables, which will replace {ip,ip6,arp,eb}tables?
> >It is also based on the rule-set mechanism.
> >The issue with that proposition is that it has only been stable since the
> >beginning of the year, and only on Linux kernel 3.13.
> >But there are lots of pros I won't list here (it addresses iptables'
> >limitations, efficient rule updates, rule sets, standardization of
> >netfilter commands...).
> >
> >Édouard.
> >
> >
> >On Thu, Jun 19, 2014 at 8:25 AM, henry hly <henry4hly at gmail.com> wrote:
> >
> >> We have done some tests but got a different result: performance is
> >> nearly the same with empty vs. 5k rules in iptables, but there is a huge
> >> gap between enabling and disabling the iptables hook on the Linux
> >> bridge.
> >>
> >>
> >> On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang <ayshihanzhang at 126.com>
> >> wrote:
> >>
> >>> I do not have accurate test data yet, but I can confirm the following
> >>> points:
> >>> 1. On a compute node, a VM's iptables chain is linear, and iptables
> >>> filters it rule by rule; if a VM is in the default security group and
> >>> that group has many members, the chain grows with membership. With an
> >>> ipset chain, the filtering time for one member versus many members
> >>> differs very little.
> >>> 2. When the iptables rule set is very large, iptables-save is very
> >>> likely to fail to save the rules.
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> At 2014-06-19 10:55:56, "Kevin Benton" <blak111 at gmail.com> wrote:
> >>>
> >>> This sounds like a good idea to handle some of the performance issues
> >>> until the OVS firewall can be implemented down the line.
> >>> Do you have any performance comparisons?
> >>> On Jun 18, 2014 7:46 PM, "shihanzhang" <ayshihanzhang at 126.com> wrote:
> >>>
> >>>> Hello all,
> >>>>
> >>>> Neutron currently uses iptables to implement security groups, but the
> >>>> performance of this implementation is very poor; there is a bug,
> >>>> https://bugs.launchpad.net/neutron/+bug/1302272, reflecting this
> >>>> problem. In that test, with default security groups (which have a
> >>>> remote security group), beyond 250-300 VMs there were around 6k
> >>>> iptables rules on every compute node. Although that patch can reduce
> >>>> the processing time, it doesn't solve the problem fundamentally. I
> >>>> have submitted a blueprint to solve this problem:
> >>>> https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
> >>>> Are other people interested in this?
> >>>>
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> OpenStack-dev mailing list
> >>>> OpenStack-dev at lists.openstack.org
> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>>
> >>>>
> >>>
> >>>
> >>> _______________________________________________
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev at lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>>
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/a
> >f7685ae/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 8
> >Date: Thu, 19 Jun 2014 08:32:25 +0100
> >From: Mark McLoughlin <markmc at redhat.com>
> >To: openstack-dev at lists.openstack.org
> >Subject: Re: [openstack-dev] [nova] A modest proposal to reduce
> >       reviewer load
> >Message-ID: <1403163145.3371.38.camel at sorcha>
> >Content-Type: text/plain; charset="UTF-8"
> >
> >Hi Armando,
> >
> >On Tue, 2014-06-17 at 14:51 +0200, Armando M. wrote:
> >> I wonder what the turnaround of trivial patches actually is, I bet you
> >> it's very very small, and as Daniel said, the human burden is rather
> >> minimal (I would be more concerned about slowing them down in the
> >> gate, but I digress).
> >
> >>
> >> I think that introducing a two-tier level for patch approval can only
> >> mitigate the problem, but I wonder if we'd need to go a lot further,
> >> and rather figure out a way to borrow concepts from queueing theory so
> >> that they can be applied in the context of Gerrit. For instance
> >> Little's law [1] says:
> >>
> >> "The long-term average number of customers (in this context reviews)
> >> in a stable system L is equal to the long-term average effective
> >> arrival rate, λ, multiplied by the average time a customer spends in
> >> the system, W; or expressed algebraically: L = λW."
> >>
> >> L can be used to determine the number of core reviewers that a project
> >> will need at any given time, in order to meet a certain arrival rate
> >> and average time spent in the queue. If the number of core reviewers
> >> is a lot less than L then that core team is understaffed and will need
> >> to increase.
> >>
> >> If we figured out how to model and measure Gerrit as a queuing system,
> >> then we could improve its performance a lot more effectively; for
> >> instance, this idea of privileging trivial patches over longer patches
> >> has roots in a popular scheduling policy [3] for  M/G/1 queues, but
> >> that does not really help aging of 'longer service time' patches and
> >> does not have a preemption mechanism built-in to avoid starvation.
> >>
> >> Just a crazy opinion...
> >> Armando
> >>
> >> [1] - http://en.wikipedia.org/wiki/Little's_law
> >> [2] - http://en.wikipedia.org/wiki/Shortest_job_first
> >> [3] - http://en.wikipedia.org/wiki/M/G/1_queue
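
Plugging in invented numbers shows the scale (these are not measured
Gerrit figures):

    arrival_rate = 40.0    # patch revisions per day (hypothetical)
    time_in_system = 7.0   # average days a patch spends in review
    L = arrival_rate * time_in_system
    print(L)  # 280 patches in flight at steady state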
> >
> >This isn't crazy at all. We do have a problem that surely could be
> >studied and solved/improved by applying queueing theory or lessons from
> >fields like lean manufacturing. Right now, we're simply applying our
> >intuition and the little I've read about these sorts of problems is that
> >your intuition can easily take you down the wrong path.
> >
> >There's a bunch of things that occur just glancing through those
> >articles:
> >
> >  - Do we have an unstable system? Would it be useful to have arrival
> >    and exit rate metrics to help highlight this? Over what time period
> >    would those rates need to be averaged to be useful? Daily, weekly,
> >    monthly, an entire release cycle?
> >
> >  - What are we trying to optimize for? The length of time in the
> >    queue? The number of patches waiting in the queue? The response
> >    time to a new patch revision?
> >
> >  - We have a single queue, with a bunch of service nodes with a wide
> >    variance between their service rates, very little in the way of
> >    scheduling policy, a huge rate of service nodes sending jobs back
> >    for rework, a cost associated with maintaining a job while it sits
> >    in the queue, the tendency for some jobs to disrupt many other jobs
> >    with merge conflicts ... not simple.
> >
> >  - Is there any sort of natural limit in our queue size that makes the
> >    system stable - e.g. do people naturally just stop submitting
> >    patches at some point?
> >
> >My intuition on all of this lately is that we need some way to model and
> >experiment with this queue, and I think we could make some interesting
> >progress if we could turn it into a queueing network rather than a
> >single, extremely complex queue.
> >
> >Say we had a front-end for gerrit which tracked which queue a patch is
> >in, we could experiment with things like:
> >
> >  - a triage queue, with non-cores signed up as triagers looking for
> >    obvious mistakes and choosing the next queue for a patch to enter
> >    into
> >
> >  - queues having a small number of cores signed up as owners - e.g.
> >    high priority bugfix, API, scheduler, object conversion, libvirt
> >    driver, vmware driver, etc.
> >
> >  - we'd allow for a large number of queues so that cores could aim for
> >    an "inbox zero" approach on individual queues, something that would
> >    probably help keep cores motivated.
> >
> >  - we could apply different scheduling policies to each of the
> >    different queues - i.e. explicit guidance for cores about which
> >    patches they should pick off the queue next.
> >
> >  - we could track metrics on individual queues as well as the whole
> >    network, identifying bottlenecks and properly recognizing which
> >    reviewers are doing a small number of difficult reviews versus
> >    those doing a high number of trivial reviews.
> >
> >  - we could require some queues to feed into a final approval queue
> >    where some people are responsible for giving an approved patch a
> >    final sanity check - i.e. there would be a class of reviewer with
> >    good instincts who quickly churn through already-reviewed patches
> >    looking for the kind of mistakes people tend to make when
> >    they're down in the weeds.
> >
> >  - explicit queues for large, cross-cutting changes like coding style
> >    changes. Perhaps we could stop servicing these queues at certain
> >    points in the cycles, or reduce the rate at which they are
> >    serviced.
> >
> >  - we could include specs and client patches in the same network so
> >    that they prioritized in the same way.
> >
> >Lots of ideas, none of it is trivial ... but perhaps it'll spark
> >someone's interest :)
> >
> >Mark.
> >
> >
> >
> >
> >------------------------------
> >
> >Message: 9
> >Date: Thu, 19 Jun 2014 09:55:10 +0200
> >From: Jérôme Gallard <gallard.jerome at gmail.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [DevStack] fails in lxc running ubuntu
> >       trusty  amd64
> >Message-ID:
> >       <CALkfQzDGe1hbWxjTeNiFkNtRvCrFsFCEnRWZ6sp+HKrjhXQwxg at mail.gmail.com>
> >Content-Type: text/plain; charset="utf-8"
> >
> >Hi Mike,
> >
> >We worked with Devstack and LXC and got the same issue (
> >https://blueprints.launchpad.net/devstack/+spec/lxc-computes ).
> >
> >The issue seems to be linked with namespace:
> >https://www.mail-archive.com/openstack-infra@lists.openstack.org/msg00839.
> >html
> >https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
> >
> >Hope it helps,
> >Jérôme
> >
> >
> >
> >2014-06-19 9:03 GMT+02:00 Mike Spreitzer <mspreitz at us.ibm.com>:
> >
> >> In my linux containers running Ubuntu 14.04 64-bit, DevStack fails
> >>because
> >> it can not install the package named tgt.  The problem is that the
> >>install
> >> script invokes the tgt service's start operation, which launches the
> >>daemon
> >> (tgtd), and the launch fails with troubles with RDMA.  Has anybody tried
> >> such a thing?  Any fixes, workarounds?  Any ideas?
> >>
> >> Thanks,
> >> Mike
> >>
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/3
> >e13cd1c/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 10
> >Date: Thu, 19 Jun 2014 09:58:15 +0200
> >From: Matthias Runge <mrunge at redhat.com>
> >To: openstack-dev at lists.openstack.org
> >Subject: Re: [openstack-dev] [Horizon] Quick Survey: Horizon Mid-Cycle
> >       Meetup
> >Message-ID: <20140619075815.GE3896 at turing.berg.ol>
> >Content-Type: text/plain; charset=us-ascii
> >
> >On Wed, Jun 18, 2014 at 10:55:59AM +0200, Jaromir Coufal wrote:
> >> My quick questions are:
> >> * Who would be interested (and able) to get to the meeting?
> >> * What topics do we want to discuss?
> >>
> >> https://etherpad.openstack.org/p/horizon-juno-meetup
> >>
> >Thanks for bringing this up!
> >
> >Do we really have items to discuss that require a meeting in person?
> >
> >Matthias
> >--
> >Matthias Runge <mrunge at redhat.com>
> >
> >
> >
> >------------------------------
> >
> >Message: 11
> >Date: Thu, 19 Jun 2014 09:34:31 +0100
> >From: Matthew Booth <mbooth at redhat.com>
> >To: openstack-dev at lists.openstack.org
> >Subject: Re: [openstack-dev] [nova] A modest proposal to reduce
> >       reviewer load
> >Message-ID: <53A2A097.3090309 at redhat.com>
> >Content-Type: text/plain; charset=UTF-8
> >
> >On 19/06/14 08:32, Mark McLoughlin wrote:
> >> Hi Armando,
> >>
> >> On Tue, 2014-06-17 at 14:51 +0200, Armando M. wrote:
> >>> I wonder what the turnaround of trivial patches actually is, I bet you
> >>> it's very very small, and as Daniel said, the human burden is rather
> >>> minimal (I would be more concerned about slowing them down in the
> >>> gate, but I digress).
> >>
> >>>
> >>> I think that introducing a two-tier level for patch approval can only
> >>> mitigate the problem, but I wonder if we'd need to go a lot further,
> >>> and rather figure out a way to borrow concepts from queueing theory so
> >>> that they can be applied in the context of Gerrit. For instance
> >>> Little's law [1] says:
> >>>
> >>> "The long-term average number of customers (in this context reviews)
> >>> in a stable system L is equal to the long-term average effective
> >>> arrival rate, λ, multiplied by the average time a customer spends in
> >>> the system, W; or expressed algebraically: L = λW."
> >>>
> >>> L can be used to determine the number of core reviewers that a project
> >>> will need at any given time, in order to meet a certain arrival rate
> >>> and average time spent in the queue. If the number of core reviewers
> >>> is a lot less than L then that core team is understaffed and will need
> >>> to increase.
> >>>
> >>> If we figured out how to model and measure Gerrit as a queuing system,
> >>> then we could improve its performance a lot more effectively; for
> >>> instance, this idea of privileging trivial patches over longer patches
> >>> has roots in a popular scheduling policy [3] for  M/G/1 queues, but
> >>> that does not really help aging of 'longer service time' patches and
> >>> does not have a preemption mechanism built-in to avoid starvation.
> >>>
> >>> Just a crazy opinion...
> >>> Armando
> >>>
> >>> [1] - http://en.wikipedia.org/wiki/Little's_law
> >>> [2] - http://en.wikipedia.org/wiki/Shortest_job_first
> >>> [3] - http://en.wikipedia.org/wiki/M/G/1_queue
> >>
> >> This isn't crazy at all. We do have a problem that surely could be
> >> studied and solved/improved by applying queueing theory or lessons from
> >> fields like lean manufacturing. Right now, we're simply applying our
> >> intuition and the little I've read about these sorts of problems is that
> >> your intuition can easily take you down the wrong path.
> >>
> >> There's a bunch of things that occur just glancing through those
> >> articles:
> >>
> >>   - Do we have an unstable system? Would it be useful to have arrival
> >>     and exit rate metrics to help highlight this? Over what time period
> >>     would those rates need to be averaged to be useful? Daily, weekly,
> >>     monthly, an entire release cycle?
> >>
> >>   - What are we trying to optimize for? The length of time in the
> >>     queue? The number of patches waiting in the queue? The response
> >>     time to a new patch revision?
> >>
> >>   - We have a single queue, with a bunch of service nodes with a wide
> >>     variance between their service rates, very little in the way of
> >>     scheduling policy, a huge rate of service nodes sending jobs back
> >>     for rework, a cost associated with maintaining a job while it sits
> >>     in the queue, the tendency for some jobs to disrupt many other jobs
> >>     with merge conflicts ... not simple.
> >>
> >>   - Is there any sort of natural limit in our queue size that makes the
> >>     system stable - e.g. do people naturally just stop submitting
> >>     patches at some point?
> >>
> >> My intuition on all of this lately is that we need some way to model and
> >> experiment with this queue, and I think we could make some interesting
> >> progress if we could turn it into a queueing network rather than a
> >> single, extremely complex queue.
> >>
> >> Say we had a front-end for gerrit which tracked which queue a patch is
> >> in, we could experiment with things like:
> >>
> >>   - a triage queue, with non-cores signed up as triagers looking for
> >>     obvious mistakes and choosing the next queue for a patch to enter
> >>     into
> >>
> >>   - queues having a small number of cores signed up as owners - e.g.
> >>     high priority bugfix, API, scheduler, object conversion, libvirt
> >>     driver, vmware driver, etc.
> >>
> >>   - we'd allow for a large number of queues so that cores could aim for
> >>     an "inbox zero" approach on individual queues, something that would
> >>     probably help keep cores motivated.
> >>
> >>   - we could apply different scheduling policies to each of the
> >>     different queues - i.e. explicit guidance for cores about which
> >>     patches they should pick off the queue next.
> >>
> >>   - we could track metrics on individual queues as well as the whole
> >>     network, identifying bottlenecks and properly recognizing which
> >>     reviewers are doing a small number of difficult reviews versus
> >>     those doing a high number of trivial reviews.
> >>
> >>   - we could require some queues to feed into a final approval queue
> >>     where some people are responsible for giving an approved patch a
> >>     final sanity check - i.e. there would be a class of reviewer with
> >>     good instincts who quickly churn through already-reviewed patches
> >>     looking for the kind of mistakes people tend to make when
> >>     they're down in the weeds.
> >>
> >>   - explicit queues for large, cross-cutting changes like coding style
> >>     changes. Perhaps we could stop servicing these queues at certain
> >>     points in the cycles, or reduce the rate at which they are
> >>     serviced.
> >>
> >>   - we could include specs and client patches in the same network so
> >>     that they prioritized in the same way.
> >>
> >> Lots of ideas, none of it is trivial ... but perhaps it'll spark
> >> someone's interest :)
> >
> >This is all good stuff, but by the sounds of it experimenting in gerrit
> >isn't likely to be simple.
> >
> >Remember, though, that the relevant metric is code quality, not review
> >rate.
> >
> >Matt
> >--
> >Matthew Booth
> >Red Hat Engineering, Virtualisation Team
> >
> >Phone: +442070094448 (UK)
> >GPG ID:  D33C3490
> >GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
> >
> >
> >
> >------------------------------
> >
> >Message: 12
> >Date: Thu, 19 Jun 2014 09:38:26 +0100
> >From: "Daniel P. Berrange" <berrange at redhat.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [nova][libvirt] Block migrations and
> >       Cinder volumes
> >Message-ID: <20140619083826.GA6545 at redhat.com>
> >Content-Type: text/plain; charset=utf-8
> >
> >On Wed, Jun 18, 2014 at 11:09:33PM -0700, Rafi Khardalian wrote:
> >> I am concerned about how block migration functions when Cinder volumes
> >>are
> >> attached to an instance being migrated.  We noticed some unexpected
> >> behavior recently, whereby attached generic NFS-based volumes would
> >>become
> >> entirely unsparse over the course of a migration.  After spending some
> >>time
> >> reviewing the code paths in Nova, I'm more concerned that this was
> >>actually
> >> a minor symptom of a much more significant issue.
> >>
> >> For those unfamiliar, NFS-based volumes are simply RAW files residing
> >>on an
> >> NFS mount.  From Libvirt's perspective, these volumes look no different
> >> than root or ephemeral disks.  We are currently not filtering out
> >>volumes
> >> whatsoever when making the request into Libvirt to perform the
> >>migration.
> >>  Libvirt simply receives an additional flag (VIR_MIGRATE_NON_SHARED_INC)
> >> when a block migration is requested, which applied to the entire
> >>migration
> >> process, not differentiated on a per-disk basis.  Numerous guards within
> >> Nova prevent a block based migration from being allowed if the
> >>instance
> >> disks exist on the destination; yet volumes remain attached and within
> >>the
> >> defined XML during a block migration.
> >>
> >> Unless Libvirt has a lot more logic around this than I am lead to
> >>believe,
> >> this seems like a recipe for corruption.  It seems as though this would
> >> also impact any type of volume attached to an instance (iSCSI, RBD,
> >>etc.),
> >> NFS just happens to be what we were testing.  If I am wrong and someone
> >>can
> >> correct my understanding, I would really appreciate it.  Otherwise, I'm
> >> surprised we haven't had more reports of issues when block migrations
> >>are
> >> used in conjunction with any attached volumes.
> >
> >Libvirt/QEMU has no special logic. When told to block-migrate, it will do
> >so for *all* disks attached to the VM in read-write-exclusive mode. It
> >will
> >only skip those marked read-only or read-write-shared mode. Even that
> >distinction is somewhat dubious, so it is not reliably what you would want.
> >
> >It seems like we should just disallow block migrate when any cinder
> >volumes
> >are attached to the VM, since there is never any valid use case for doing
> >block migrate from a cinder volume to itself.
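> >
> >To make that concrete, here is a minimal sketch of such a guard (the
> >helper name and exception usage are illustrative, not Nova's actual
> >code):
> >
> >from nova import exception
> >
> >def assert_block_migrate_allowed(block_device_info):
> >    # hypothetical check: refuse block migration outright when any
> >    # cinder volume is attached, since libvirt would otherwise copy
> >    # each attached volume onto itself
> >    mapping = (block_device_info or {}).get('block_device_mapping', [])
> >    if mapping:
> >        raise exception.MigrationError(
> >            reason='block migration with attached volumes is unsafe')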
> >
> >Regards,
> >Daniel
> >--
> >|: http://berrange.com      -o-
> >http://www.flickr.com/photos/dberrange/ :|
> >|: http://libvirt.org              -o-
> >http://virt-manager.org :|
> >|: http://autobuild.org       -o-
> >http://search.cpan.org/~danberr/ :|
> >|: http://entangle-photo.org       -o-
> >http://live.gnome.org/gtk-vnc :|
> >
> >
> >
> >------------------------------
> >
> >Message: 13
> >Date: Thu, 19 Jun 2014 11:01:19 +0200
> >From: Giulio Fidente <gfidente at redhat.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [TripleO] Backwards compatibility policy
> >       for our projects
> >Message-ID: <53A2A6DF.5010606 at redhat.com>
> >Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> >
> >On 06/16/2014 10:06 PM, James Slagle wrote:
> >> On Mon, Jun 16, 2014 at 12:19 PM, Tomas Sedovic <tsedovic at redhat.com>
> >>wrote:
> >
> >>> If we do promise backwards compatibility, we should document it
> >>> somewhere and if we don't we should probably make that more visible,
> >>> too, so people know what to expect.
> >>>
> >>> I prefer the latter, because it will make the merge.py cleanup easier
> >>> and every published bit of information I could find suggests that's our
> >>> current stance anyway.
> >
> >> Much of this is the reason why I pushed for the stable branches that
> >> we cut for icehouse. I'm not sure what "downstreams that have shipped
> >> things" are being referred to, but perhaps those needs could be served
> >> by the stable/icehouse branches that exist today?  I know at least for
> >> the RDO downstream, the packages are being built off of releases done
> >> from the stable branches. So, honestly, I'm not that concerned about
> >> your proposed changes to rip stuff out without any deprecation from
> >> that point of view :).
> >
> >+1 on relying on the branches
> >
> >I personally don't see backward compatibility as much of an issue in
> >TripleO (though I may be missing some pieces) ... more below
> >
> >> That being said, just because TripleO has taken the stance that
> >> backwards compatibility is not guaranteed, I agree with some of the
> >> other sentiments in this thread: that we should at least try if there
> >> are easy things we can do.
> >
> > From a 10,000-foot view, I can imagine people relying on a stable
> >API for services like Cinder but I don't see how that applies to TripleO
> >
> >Why should one try to install an older version of OpenStack using some
> >'recent' version of TripleO?
> >
> >The only scenario which comes to my mind is the 'upgrade' process
> >where undercloud and overcloud could get 'out of sync', yet this seems
> >to have a pretty limited scope.
> >
> >A genuine question from a 'wannabe' TripleO contributor: do you see
> >others? Many others?
> >--
> >Giulio Fidente
> >GPG KEY: 08D733BA
> >
> >
> >
> >------------------------------
> >
> >Message: 14
> >Date: Thu, 19 Jun 2014 13:06:41 +0300
> >From: Ronen Kat <RONENKAT at il.ibm.com>
> >To: "OpenStack Development Mailing List \(not for usage questions\)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [nova][libvirt] Block migrations and
> >       Cinder  volumes
> >Message-ID:
> >       <OFF787EA7D.FFEEE06B-ONC2257CFC.00370FAE-C2257CFC.00378B6E at il.ibm.com>
> >Content-Type: text/plain; charset="us-ascii"
> >
> >The use-case for block migration in Libvirt/QEMU is to allow migration
> >between two different back-ends.
> >This is basically a host-based volume migration; ESXi has similar
> >functionality (storage vMotion), but it is probably not enabled with OpenStack.
> >Btw, if the Cinder volume driver can migrate the volume by itself,
> >Libvirt/QEMU is not called upon; but if it can't (different vendors' boxes
> >don't talk to each other), then Cinder asks Nova to help move the data...
> >
> >If you are missing this host-based process you basically have a "data
> >lock-in" on a specific back-end - the use case could be storage
> >evacuation, or just moving the data to a different box.
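> >
> >For reference, that host-assisted path is what the existing volume
> >migration API drives; from the CLI it is roughly (exact flags may vary
> >by release):
> >
> >    cinder migrate <volume-id> <destination-host>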
> >
> >Ronen,
> >
> >
> >
> >From:   "Daniel P. Berrange" <berrange at redhat.com>
> >To:     "OpenStack Development Mailing List (not for usage questions)"
> ><openstack-dev at lists.openstack.org>,
> >Date:   19/06/2014 11:42 AM
> >Subject:        Re: [openstack-dev] [nova][libvirt] Block migrations and
> >Cinder volumes
> >
> >
> >
> >On Wed, Jun 18, 2014 at 11:09:33PM -0700, Rafi Khardalian wrote:
> >> I am concerned about how block migration functions when Cinder volumes
> >are
> >> attached to an instance being migrated.  We noticed some unexpected
> >> behavior recently, whereby attached generic NFS-based volumes would
> >become
> >> entirely unsparse over the course of a migration.  After spending some
> >time
> >> reviewing the code paths in Nova, I'm more concerned that this was
> >actually
> >> a minor symptom of a much more significant issue.
> >>
> >> For those unfamiliar, NFS-based volumes are simply RAW files residing
> >>on
> >an
> >> NFS mount.  From Libvirt's perspective, these volumes look no different
> >> than root or ephemeral disks.  We are currently not filtering out
> >volumes
> >> whatsoever when making the request into Libvirt to perform the
> >migration.
> >>  Libvirt simply receives an additional flag (VIR_MIGRATE_NON_SHARED_INC)
> >> when a block migration is requested, which applies to the entire
> >migration
> >> process, not differentiated on a per-disk basis.  Numerous guards exist
> >> within Nova to prevent a block-based migration from being allowed if the
> >instance
> >> disks exist on the destination; yet volumes remain attached and within
> >the
> >> defined XML during a block migration.
> >>
> >> Unless Libvirt has a lot more logic around this than I am led to
> >believe,
> >> this seems like a recipe for corruption.  It seems as though this would
> >> also impact any type of volume attached to an instance (iSCSI, RBD,
> >etc.),
> >> NFS just happens to be what we were testing.  If I am wrong and someone
> >can
> >> correct my understanding, I would really appreciate it.  Otherwise, I'm
> >> surprised we haven't had more reports of issues when block migrations
> >are
> >> used in conjunction with any attached volumes.
> >
> >Libvirt/QEMU has no special logic. When told to block-migrate, it will do
> >so for *all* disks attached to the VM in read-write-exclusive mode. It
> >will
> >only skip those marked read-only or read-write-shared mode. Even that
> >distinction is somewhat dubious, so it is not reliably what you would want.
> >
> >It seems like we should just disallow block migrate when any cinder
> >volumes
> >are attached to the VM, since there is never any valid use case for doing
> >block migrate from a cinder volume to itself.
> >
> >Regards,
> >Daniel
> >--
> >|: http://berrange.com      -o-
> >http://www.flickr.com/photos/dberrange/
> >:|
> >|: http://libvirt.org              -o-
> >http://virt-manager.org
> >:|
> >|: http://autobuild.org       -o-
> >http://search.cpan.org/~danberr/
> >:|
> >|: http://entangle-photo.org       -o-
> >http://live.gnome.org/gtk-vnc
> >:|
> >
> >_______________________________________________
> >OpenStack-dev mailing list
> >OpenStack-dev at lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/8
> >041b725/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 15
> >Date: Thu, 19 Jun 2014 15:40:37 +0530
> >From: AMIT PRAKASH PANDEY <amitpp23 at gmail.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for
> >       nova-core
> >Message-ID:
> >       <CADZpTrZevLW6fLSYTw3ePyeg+bpW0Oip8_Hy=yBAiwteFRnqUQ at mail.gmail.com>
> >Content-Type: text/plain; charset="utf-8"
> >
> >Congrats!
> >
> >
> >
> >On Thu, Jun 19, 2014 at 7:06 AM, wu jiang <wingwj at gmail.com> wrote:
> >
> >> Congratulations!
> >>
> >>
> >> On Wed, Jun 18, 2014 at 7:07 PM, Kenichi Oomichi <
> >> oomichi at mxs.nes.nec.co.jp> wrote:
> >>
> >>>
> >>> > -----Original Message-----
> >>> > From: Michael Still [mailto:mikal at stillhq.com]
> >>> > Sent: Wednesday, June 18, 2014 7:54 PM
> >>> > To: OpenStack Development Mailing List (not for usage questions)
> >>> > Subject: Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for
> >>> nova-core
> >>> >
> >>> > Kenichi has now been added to the nova-core group in gerrit. Welcome
> >>> aboard!
> >>>
> >>> Thank you for many +1s, and I'm glad to join the nova-core group :-)
> >>> I am going to work hard for smooth development.
> >>>
> >>>
> >>> Thanks
> >>> Ken'ichi Ohmichi
> >>>
> >>> ---
> >>>
> >>> > On Tue, Jun 17, 2014 at 6:18 PM, Michael Still <mikal at stillhq.com>
> >>> wrote:
> >>> > > Hi. I'm going to let this sit for another 24 hours, and then we'll
> >>> > > declare it closed.
> >>> > >
> >>> > > Cheers,
> >>> > > Michael
> >>> > >
> >>> > > On Tue, Jun 17, 2014 at 6:16 AM, Mark McLoughlin
> >>><markmc at redhat.com>
> >>> wrote:
> >>> > >> On Sat, 2014-06-14 at 08:40 +1000, Michael Still wrote:
> >>> > >>> Greetings,
> >>> > >>>
> >>> > >>> I would like to nominate Ken'ichi Ohmichi for the nova-core team.
> >>> > >>>
> >>> > >>> Ken'ichi has been involved with nova for a long time now.  His
> >>> reviews
> >>> > >>> on API changes are excellent, and he's been part of the team that
> >>> has
> >>> > >>> driven the new API work we've seen in recent cycles forward.
> >>> Ken'ichi
> >>> > >>> has also been reviewing other parts of the code base, and I think
> >>> his
> >>> > >>> reviews are detailed and helpful.
> >>> > >>
> >>> > >> +1, great to see Ken'ichi join the team
> >>> > >>
> >>> > >> Mark.
> >>> > >>
> >>> > >>
> >>> > >> _______________________________________________
> >>> > >> OpenStack-dev mailing list
> >>> > >> OpenStack-dev at lists.openstack.org
> >>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> > >
> >>> > >
> >>> > >
> >>> > > --
> >>> > > Rackspace Australia
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > Rackspace Australia
> >>> >
> >>> > _______________________________________________
> >>> > OpenStack-dev mailing list
> >>> > OpenStack-dev at lists.openstack.org
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>> _______________________________________________
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev at lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/6
> >ab07e1e/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 16
> >Date: Thu, 19 Jun 2014 10:14:24 +0000
> >From: Kenichi Oomichi <oomichi at mxs.nes.nec.co.jp>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: [openstack-dev] [nova] Nova API meeting
> >Message-ID:
> >       <663E0F2C6F9EEA4AAD1DEEBD89158AAE01036DD7 at BPXM06GP.gisp.nec.co.jp>
> >Content-Type: text/plain; charset="iso-2022-jp"
> >
> >Hi,
> >
> >Chris will have a day off tomorrow.
> >I'd like to run the next meeting instead.
> >
> >Just a reminder that the weekly Nova API meeting is being held tomorrow
> >(Friday) at 0000 UTC.
> >
> >We encourage cloud operators and those who use the REST API, such as
> >SDK developers and others who are interested in the future of the
> >API to participate.
> >
> >In other timezones the meeting is at:
> >
> >EST 20:00 (Thu)
> >Japan 09:00 (Fri)
> >China 08:00 (Fri)
> >ACDT 9:30 (Fri)
> >
> >The proposed agenda and meeting details are here:
> >
> >https://wiki.openstack.org/wiki/Meetings/NovaAPI
> >
> >Please feel free to add items to the agenda.
> >
> >
> >Thanks
> >Ken'ichi Ohmichi
> >
> >
> >
> >
> >------------------------------
> >
> >Message: 17
> >Date: Thu, 19 Jun 2014 18:26:19 +0800
> >From: Zang MingJie <zealot0630 at gmail.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent
> >       architecture
> >Message-ID:
> >       <CAOrge3pdDZE-9xoo1PsKG7wM-o-NJO7COLkshEfau3JLGhVa3Q at mail.gmail.com>
> >Content-Type: text/plain; charset=UTF-8
> >
> >Hi:
> >
> >I don't like the idea of ResourceDriver and AgentDriver. I suggest
> >using a singleton worker thread to manage all underlying setup, so the
> >driver should do nothing other than fire an update event to the worker.
> >
> >The worker thread may look like this:
> >
> ># the only variable storing all local state that survives between
> ># different events, including lvm, fdb or whatever
> >state = {}
> >
> ># loop forever
> >while True:
> >    event = ev_queue.pop()
> >    if not event:
> >        sleep() # may be interrupted when new event comes
> >        continue
> >
> >    old_state = state
> >    new_state = event.merge_state(state)
> >
> >    if event.is_ovsdb_changed():
> >        if event.is_tunnel_changed():
> >            setup_tunnel(new_state, old_state, event)
> >        if event.is_port_tags_changed():
> >            setup_port_tags(new_state, old_state, event)
> >
> >    if event.is_flow_changed():
> >        if event.is_flow_table_1_changed():
> >            setup_flow_table_1(new_state, old_state, event)
> >        if event.is_flow_table_2_changed():
> >            setup_flow_table_2(new_state, old_state, event)
> >        if event.is_flow_table_3_changed():
> >            setup_flow_table_3(new_state, old_state, event)
> >        if event.is_flow_table_4_changed():
> >            setup_flow_table_4(new_state, old_state, event)
> >
> >    if event.is_iptable_changed():
> >        if event.is_iptable_nat_changed():
> >            setup_iptable_nat(new_state, old_state, event)
> >        if event.is_iptable_filter_changed():
> >            setup_iptable_filter(new_state, old_state, event)
> >
> >    state = new_state
> >
> >When any part has been changed by an event, the corresponding setup_xxx
> >function rebuilds the whole part, then uses a restore command like
> >`iptables-restore` or `ovs-ofctl replace-flows` to reset the whole
> >part.
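> >
> >As a rough illustration of that pattern (the state layout and rule
> >format here are invented for the example), setup_iptable_nat could be:
> >
> >import subprocess
> >
> >def setup_iptable_nat(new_state, old_state, event):
> >    # rebuild the complete NAT ruleset from the new state instead of
> >    # patching individual rules
> >    lines = ['*nat']
> >    for rule in new_state.get('nat_rules', []):
> >        lines.append('-A %s %s' % (rule['chain'], rule['spec']))
> >    lines.append('COMMIT')
> >    # iptables-restore swaps the whole table in atomically
> >    proc = subprocess.Popen(['iptables-restore'], stdin=subprocess.PIPE)
> >    proc.communicate('\n'.join(lines) + '\n')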
> >
> >
> >
> >------------------------------
> >
> >Message: 18
> >Date: Thu, 19 Jun 2014 10:40:55 +0000
> >From: Kenichi Oomichi <oomichi at mxs.nes.nec.co.jp>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [qa] Clarification of policy for qa-specs
> >       around adding new tests
> >Message-ID:
> >       <663E0F2C6F9EEA4AAD1DEEBD89158AAE01036E4A at BPXM06GP.gisp.nec.co.jp>
> >Content-Type: text/plain; charset="iso-2022-jp"
> >
> >
> >> -----Original Message-----
> >> From: Matthew Treinish [mailto:mtreinish at kortar.org]
> >> Sent: Wednesday, June 18, 2014 10:33 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [qa] Clarification of policy for qa-specs
> >>around adding new tests
> >>
> >> On Tue, Jun 17, 2014 at 01:45:55AM +0000, Kenichi Oomichi wrote:
> >> >
> >> > > -----Original Message-----
> >> > > From: Matthew Treinish [mailto:mtreinish at kortar.org]
> >> > > Sent: Monday, June 16, 2014 11:58 PM
> >> > > To: OpenStack Development Mailing List (not for usage questions)
> >> > > Subject: Re: [openstack-dev] [qa] Clarification of policy for
> >>qa-specs around adding new tests
> >> > >
> >> > > On Mon, Jun 16, 2014 at 10:46:51AM -0400, David Kranz wrote:
> >> > > > I have been reviewing some of these specs and sense a lack of
> >>clarity around
> >> > > > what is expected. In the pre-qa-specs world we did not want
> >>tempest
> >> > > > blueprints to be used by projects to track their tempest test
> >>submissions
> >> > > > because the core review team did not want to have to spend a lot
> >>of time
> >> > > > dealing with that. We said that each project could have one
> >>tempest
> >> > > > blueprint that would point to some other place (project
> >>blueprints,
> >> > > > spreadsheet, etherpad, etc.) that would track specific tests to
> >>be added.
> >> > > > I'm not sure what aspect of the new qa-spec process would make us
> >>feel
> >> > > > differently about this. Has this policy changed? We should spell
> >>out the
> >> > > > expectation in any event. I will update the README when we have a
> >> > > > conclusion.
> >> > > >
> >> > >
> >> > > The policy has not changed. There should be 1 BP (or maybe 2 or 3
> >>if they want
> >> > > to split the effort a bit more granularly for tracking different
> >>classes of
> >> > > tests, but still 1 BP series) for improving project tests. For
> >>individual tests
> >> > > part of a bigger effort should be tracked outside of the Tempest
> >>LP. IMO after
> >> it's approved, the spec/BP for tracking test additions is only
> >>really useful for
> >> having a unified topic to use for review classification.
> >> >
> >> > +1 to use a single blueprint for adding new tests of each project.
> >> The unified topic for each project would be useful for getting each
> >> project's reviewers' effort on the Tempest test reviews.
> >> > To add new tests, do we need to have qa-specs, or is it OK to have
> >> > blueprints only?
> >> >
> >>
> >> So I've been asking all the new BPs for project testing being opened
> >>this cycle
> >> to have a spec too. My feeling is that we should only have one process
> >>for doing
> >> BPs/specs that way we get all the artifacts in the same place. It
> >>should also
> >> hopefully get everyone more involved with the qa-specs workflow.
> >
> >I see, thank you for clarifying it.
> >
> >> The specs for adding project tests should be pretty simple; they just
> >>basically
> >> need to outline what project is going to be tested, what types of tests
> >>are
> >> going to be worked on, (API, CLI, etc..)  and how the test development
> >>is going
> >> to be tracked. (etherpad, google doc, etc.)
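> >>
> >> Purely as an illustration (the section names here are invented, not
> >> taken from the qa-specs template), such a spec could be as short as:
> >>
> >>     Project to be tested: <project>
> >>     Test types:           API, CLI
> >>     Tracking:             <etherpad or google doc URL>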
> >
> >That is nice advice, it seems easy to review also :-)
> >
> >Thanks
> >Ken'ichi Ohmichi
> >
> >
> >
> >
> >------------------------------
> >
> >Message: 19
> >Date: Thu, 19 Jun 2014 06:43:36 -0400
> >From: Sean Dague <sean at dague.net>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [DevStack] fails in lxc running ubuntu
> >       trusty amd64
> >Message-ID: <53A2BED8.4040004 at dague.net>
> >Content-Type: text/plain; charset="utf-8"
> >
> >On 06/19/2014 03:03 AM, Mike Spreitzer wrote:
> >> In my linux containers running Ubuntu 14.04 64-bit, DevStack fails
> >> because it cannot install the package named tgt.  The problem is that
> >> the install script invokes the tgt service's start operation, which
> >> launches the daemon (tgtd), and the launch fails with troubles with
> >> RDMA.  Has anybody tried such a thing?  Any fixes, workarounds?  Any
> >>ideas?
> >
> >My understanding is that's basically the blocker to running devstack
> >inside a container: whatever does iSCSI fails in that environment.
> >
> >I do not yet know anyone that's solved this. I'd be super happy if
> >someone did. I'd love to be running devstack on my desktop in a container.
> >
> >If you completely disable cinder it might work. However, that's not a
> >service collection that's really tested by anyone right now, so I expect
> >that will expose more issues.
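> >
> >For anyone who wants to try, a minimal localrc sketch (service names
> >per current devstack; untested inside lxc) would be something like:
> >
> >    disable_service cinder c-api c-vol c-sch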
> >
> >       -Sean
> >
> >--
> >Sean Dague
> >http://dague.net
> >
> >-------------- next part --------------
> >A non-text attachment was scrubbed...
> >Name: signature.asc
> >Type: application/pgp-signature
> >Size: 482 bytes
> >Desc: OpenPGP digital signature
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/3
> >79bffa4/attachment-0001.pgp>
> >
> >------------------------------
> >
> >Message: 20
> >Date: Thu, 19 Jun 2014 12:45:28 +0200
> >From: Swann Croiset <swannon at gmail.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [FUEL] Zabbix in MOS meeting notes
> >Message-ID:
> >       <CAEjdo88RN7P+zq_iwy9fTeK=FYX0nc94JAq9L=kmGRLNDCiHWA at mail.gmail.com>
> >Content-Type: text/plain; charset="utf-8"
> >
> >Hi,
> >
> >definitely nice to have an OS monitoring system alive with Fuel
> >deployments!
> >
> >I have some questions inline...
> >
> >thanks
> >
> >2014-06-18 11:07 GMT+02:00 Alexander Kislitsky <akislitsky at mirantis.com>:
> >
> >> 18.07.2014
> >>
> >> Participants:
> >> Szymon Banka,
> >> Bartek Kupidura,
> >> Dmitry Nikishov
> >> Alexander Kislitsky
> >>
> >> Discussed limitations of the current implementation, timelines, and
> >> integration workflow.
> >> Colleagues are going to build a custom ISO with the current Zabbix
> >> monitoring implementation, test it, and add review comments.
> >>
> >
> >Are you talking about this blueprint and its reviews:
> >https://blueprints.launchpad.net/fuel/+spec/monitoring-system
> >
> >> Next week we plan to review and probably merge an improvement of Zabbix
> >> monitoring based on implementation for Ericsson.
> >> For HA clusters Zabbix server should be installed on the controller
> >>nodes.
> >> This requirement will be researched and implemented in nailgun part.
> >>
> >
> >I'm wondering why you must deploy Zabbix on the controller nodes; do you
> >mean to have the Zabbix server in HA mode too?
> >What about non-HA clusters - does the Zabbix server run on another node
> >(i.e. the Fuel master)?
> >
> >
> >
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/9
> >46a1b4e/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 21
> >Date: Thu, 19 Jun 2014 10:55:50 +0000
> >From: "Paul Michali (pcm)" <pcm at cisco.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [Neutron][L3] Team Meeting Thursday at
> >       1500 UTC
> >Message-ID: <4877FFF4-FC42-48CD-BDDA-7CBF5179D069 at cisco.com>
> >Content-Type: text/plain; charset="windows-1252"
> >
> >I can't make the meeting today, but I have updated the agenda for L3 vendor
> >stuff. Please ask the team to review the WIP code I have out.
> >
> >Thanks!
> >
> >PCM (Paul Michali)
> >
> >MAIL ..... pcm at cisco.com
> >IRC ...... pcm_ (irc.freenode.com)
> >TW ....... @pmichali
> >GPG Key .. 4525ECC253E31A83
> >Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
> >
> >
> >
> >On Jun 18, 2014, at 4:27 PM, Brian Haley <brian.haley at hp.com> wrote:
> >
> >> The Neutron L3 Subteam will meet tomorrow at the regular time in
> >> #openstack-meeting-3.  The agenda [1] is posted, please update as
> >>needed.
> >>
> >> I'll be standing in for Carl as he's on vacation this week.
> >>
> >> Brian Haley
> >>
> >> [1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >-------------- next part --------------
> >A non-text attachment was scrubbed...
> >Name: signature.asc
> >Type: application/pgp-signature
> >Size: 842 bytes
> >Desc: Message signed with OpenPGP using GPGMail
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/d
> >c1ad4b3/attachment-0001.pgp>
> >
> >------------------------------
> >
> >Message: 22
> >Date: Thu, 19 Jun 2014 12:11:30 +0100
> >From: Duncan Thomas <duncan.thomas at gmail.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [TripleO] Backwards compatibility policy
> >       for     our projects
> >Message-ID:
> >       <CAOyZ2aFtC0BAebzApwBdiuDCWALtGuvoFJVWSCWCDqT5SveSUg at mail.gmail.com>
> >Content-Type: text/plain; charset=UTF-8
> >
> >On 19 June 2014 10:01, Giulio Fidente <gfidente at redhat.com> wrote:
> >
> >> From a 10,000-foot view, I can imagine people relying on a stable
> >>API
> >> for services like Cinder but I don't see how that applies to TripleO
> >>
> >> Why should one try to install an older version of OpenStack using some
> >> 'recent' version of TripleO?
> >>
> >> The only scenario which comes to my mind is the 'upgrade' process
> >>where
> >> undercloud and overcloud could get 'out of sync', yet this seems to
> >>have a
> >> pretty limited scope.
> >>
> >> A genuine question from a 'wannabe' TripleO contributor: do you see
> >>others?
> >> Many others?
> >
> >There are companies (e.g. HP) who're trying to ship products based on
> >TripleO. This means a) elements that are not upstreamed because they
> >install proprietary / value-add code b) elements that are carrying
> >changes that are not yet merged upstream because the velocity of
> >upstream is very low. While I'm not suggesting you should bend over
> >backwards to accommodate such users, some consideration is a
> >reasonable expectation I think, particularly for (a).
> >
> >
> >
> >------------------------------
> >
> >Message: 23
> >Date: Thu, 19 Jun 2014 15:20:53 +0400
> >From: Alexander Kislitsky <akislitsky at mirantis.com>
> >To: openstack-dev at lists.openstack.org
> >Subject: Re: [openstack-dev] [FUEL] Zabbix in MOS meeting notes
> >Message-ID:
> >       <CAHWr6fkGXev3oxuK1MQ0rT4LPvLBEDzjnAZjVKyA-Ba0kxDYwA at mail.gmail.com>
> >Content-Type: text/plain; charset="utf-8"
> >
> >Hi!
> >
> >You are right, we are talking about
> >https://blueprints.launchpad.net/fuel/+spec/monitoring-system
> >
> >We want to estimate the time required for merging the HA implementation
> >into the current code. If possible, we will try to ship the HA solution in
> >the current release; otherwise it will be delivered in a later release. In
> >the current (non-HA) implementation the Zabbix server is deployed onto a
> >separate node.
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/3
> >966c56d/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 24
> >Date: Thu, 19 Jun 2014 16:51:30 +0530
> >From: shiva m <anjaneya2 at gmail.com>
> >To: openstack-dev at lists.openstack.org
> >Subject: [openstack-dev] Help in writing new neutron plugin
> >Message-ID:
> >       <CAGm0vUqEUvw0-mM4rEiyUa6EKW5Eq0oCkXYUHWv73-DA9yW6xA at mail.gmail.com>
> >Content-Type: text/plain; charset="utf-8"
> >
> >HI,
> >
> >I am looking at documents to understand how to write a new neutron plugin.
> >I have been working with a devstack-with-ryu setup for the last 4 months.
> >I am trying to write a test plugin to get hands-on experience.
> >
> >Can anyone please point out where to start looking in the neutron code and
> >where to add new code? I am also looking for beginner-level documentation
> >on writing ML2 type drivers and mechanism drivers.
> >
> >
> >Thanks
> >Shiva
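> >
> >As a rough starting point, a minimal ML2 mechanism driver looks
> >something like this (module path per the current neutron tree - check
> >your branch; only initialize() is mandatory):
> >
> >from neutron.plugins.ml2 import driver_api as api
> >
> >class MyMechanismDriver(api.MechanismDriver):
> >    def initialize(self):
> >        # one-time setup after the driver is loaded
> >        pass
> >
> >    def create_port_postcommit(self, context):
> >        # context.current holds the port dict being created
> >        pass
> >
> >It is then registered under the neutron.ml2.mechanism_drivers entry
> >point in setup.cfg and enabled via the mechanism_drivers option in
> >ml2_conf.ini.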
> >-------------- next part --------------
> >An HTML attachment was scrubbed...
> >URL:
> ><http://lists.openstack.org/pipermail/openstack-dev/attachments/20140619/d
> >2cce81c/attachment-0001.html>
> >
> >------------------------------
> >
> >Message: 25
> >Date: Thu, 19 Jun 2014 12:35:04 +0100
> >From: Duncan Thomas <duncan.thomas at gmail.com>
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >       <openstack-dev at lists.openstack.org>
> >Subject: Re: [openstack-dev] [nova][libvirt] Block migrations and
> >       Cinder  volumes
> >Message-ID:
> >       <CAOyZ2aEOqXYAUC01MXTKv_kfOtvcyF6pkpHGVQHsfo9tVrkDPQ at mail.gmail.com>
> >Content-Type: text/plain; charset=UTF-8
> >
> >I think there are two different processes here making use of libvirt
> >block migration:
> >
> >1) Instance migration, which should not do anything with cinder volumes
> >2) Cinder live migration between backends, which is what I think Ronen
> >Kat is referring to
> >
> >On 19 June 2014 11:06, Ronen Kat <RONENKAT at il.ibm.com> wrote:
> >> The use-case for block migration in Libvirt/QEMU is to allow migration
> >> between two different back-ends.
> >> This is basically a host-based volume migration; ESXi has similar
> >> functionality (storage vMotion), but it is probably not enabled with
> >>OpenStack.
> >> Btw, if the Cinder volume driver can migrate the volume by itself,
> >> Libvirt/QEMU is not called upon; but if it can't (different vendors' boxes
> >> don't talk to each other), then Cinder asks Nova to help move the
> >>data...
> >>
> >> If you are missing this host-based process you basically have a
> >>"data
> >> lock-in" on a specific back-end - the use case could be storage
> >>evacuation,
> >> or just moving the data to a different box.
> >>
> >> Ronen,
> >>
> >>
> >>
> >> From:        "Daniel P. Berrange" <berrange at redhat.com>
> >> To:        "OpenStack Development Mailing List (not for usage
> >>questions)"
> >> <openstack-dev at lists.openstack.org>,
> >> Date:        19/06/2014 11:42 AM
> >> Subject:        Re: [openstack-dev] [nova][libvirt] Block migrations and
> >> Cinder volumes
> >> ________________________________
> >>
> >>
> >>
> >> On Wed, Jun 18, 2014 at 11:09:33PM -0700, Rafi Khardalian wrote:
> >>> I am concerned about how block migration functions when Cinder volumes
> >>>are
> >>> attached to an instance being migrated.  We noticed some unexpected
> >>> behavior recently, whereby attached generic NFS-based volumes would
> >>>become
> >>> entirely unsparse over the course of a migration.  After spending some
> >>> time
> >>> reviewing the code paths in Nova, I'm more concerned that this was
> >>> actually
> >>> a minor symptom of a much more significant issue.
> >>>
> >>> For those unfamiliar, NFS-based volumes are simply RAW files residing
> >>>on
> >>> an
> >>> NFS mount.  From Libvirt's perspective, these volumes look no different
> >>> than root or ephemeral disks.  We are currently not filtering out
> >>>volumes
> >>> whatsoever when making the request into Libvirt to perform the
> >>>migration.
> >>>  Libvirt simply receives an additional flag
> >>>(VIR_MIGRATE_NON_SHARED_INC)
> >>> when a block migration is requested, which applies to the entire
> >>>migration
> >>> process, not differentiated on a per-disk basis.  Numerous guards exist
> >>> within Nova to prevent a block-based migration from being allowed if the
> >>>instance
> >>> disks exist on the destination; yet volumes remain attached and within
> >>>the
> >>> defined XML during a block migration.
> >>>
> >>> Unless Libvirt has a lot more logic around this than I am led to
> >>>believe,
> >>> this seems like a recipe for corruption.  It seems as though this would
> >>> also impact any type of volume attached to an instance (iSCSI, RBD,
> >>>etc.),
> >>> NFS just happens to be what we were testing.  If I am wrong and someone
> >>> can
> >>> correct my understanding, I would really appreciate it.  Otherwise, I'm
> >>> surprised we haven't had more reports of issues when block migrations
> >>>are
> >>> used in conjunction with any attached volumes.
> >>
> >> Libvirt/QEMU has no special logic. When told to block-migrate, it will
> >>do
> >> so for *all* disks attached to the VM in read-write-exclusive mode. It
> >>will
> >> only skip those marked read-only or read-write-shared mode. Even that
> >> distinction is somewhat dubious, so it is not reliably what you would want.
> >>
> >> It seems like we should just disallow block migrate when any cinder
> >>volumes
> >> are attached to the VM, since there is never any valid use case for
> >>doing
> >> block migrate from a cinder volume to itself.
> >>
> >> Regards,
> >> Daniel
> >> --
> >> |: http://berrange.com      -o-
> >>http://www.flickr.com/photos/dberrange/
> >> :|
> >> |: http://libvirt.org              -o-
> >>http://virt-manager.org
> >> :|
> >> |: http://autobuild.org       -o-
> >>http://search.cpan.org/~danberr/
> >> :|
> >> |: http://entangle-photo.org       -o-
> >>http://live.gnome.org/gtk-vnc
> >> :|
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> >--
> >Duncan Thomas
> >
> >
> >
> >------------------------------
> >
> >Message: 26
> >Date: Thu, 19 Jun 2014 07:53:37 -0400
> >From: Davanum Srinivas <davanum at gmail.com>
> >To: openstack-dev at lists.openstack.org
> >Subject: [openstack-dev] [oslo] oslo.utils and oslo.local
> >Message-ID:
> >       <CANw6fcGovTQsuvkLjdDc7fVGwREdV+AMVohxis5yABZadZbJrQ at mail.gmail.com>
> >Content-Type: text/plain; charset=UTF-8
> >
> >Hi,
> >
> >Please review the following git repo(s):
> >https://github.com/dims/oslo.local
> >https://github.com/dims/oslo.utils
> >
> >the corresponding specs are here:
> >https://review.openstack.org/#/c/98431/
> >https://review.openstack.org/#/c/99028/
> >
> >Thanks,
> >dims
> >
> >--
> >Davanum Srinivas :: http://davanum.wordpress.com
> >
> >
> >
> >------------------------------
> >
> >_______________________________________________
> >OpenStack-dev mailing list
> >OpenStack-dev at lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >End of OpenStack-dev Digest, Vol 26, Issue 63
> >*********************************************
> 


